Abstract: Professor Kwok-fai So, the editor-in-chief of Neural Regeneration Research, has been named a Fellow of the National Academy of Inventors (NAI). Professor So is with the Department of Ophthalmology, Li Ka Shing Faculty of Medicine, The University of Hong Kong (HKU).
Abstract: This paper proposes a new neural fuzzy inference system that consists mainly of four parts. The first part describes how to use a neural network to express the relation within a fuzzy rule. The second part is a simplification of the first, and experiments show that these simplifications work. In contrast, the third part is an enhancement of the first part; it can be used when the first part does not perform well in the fuzzy inference algorithm, which is introduced in the fourth part. Finally, the fourth part, the neural fuzzy inference algorithm, is introduced; it infers a new membership function for the output based on previous fuzzy rules. The accuracy of the fuzzy inference algorithm depends on the generalization ability of the neural network. Even if the generalization ability of the neural network is good, the results can still be inaccurate, since a newly arriving rule may not be related to any of the previous rules. Experiments show that the algorithm is successful in situations that satisfy these conditions.
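To make the inference step concrete, here is a minimal sketch of the core idea as described above: each fuzzy rule's antecedent and consequent membership functions are sampled on a fixed grid, a small neural network learns the antecedent-to-consequent relation, and the output membership function for a new antecedent is read off the trained network. The grid, the triangular membership functions, and the MLPRegressor settings are illustrative assumptions, not the paper's construction.

```python
# Minimal sketch (assumptions throughout, not the paper's construction):
# encode each fuzzy rule by sampling its antecedent and consequent membership
# functions on a fixed grid, let a small neural network learn the
# antecedent -> consequent relation, then infer the output membership
# function for a new antecedent from the trained network.
import numpy as np
from sklearn.neural_network import MLPRegressor

grid = np.linspace(0.0, 10.0, 21)          # shared discretisation of the universe

def triangular(center, width):
    """Sample a triangular membership function on the grid."""
    return np.clip(1.0 - np.abs(grid - center) / width, 0.0, 1.0)

# Previous fuzzy rules "if x is A_i then y is B_i", encoded as membership samples.
antecedents = np.stack([triangular(c, 2.0) for c in (2.0, 4.0, 6.0, 8.0)])
consequents = np.stack([triangular(c + 1.0, 1.5) for c in (2.0, 4.0, 6.0, 8.0)])

# The network expresses the relation within the rules.
net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
net.fit(antecedents, consequents)

# Infer the output membership function for a new antecedent related to the old ones.
new_A = triangular(5.0, 2.0).reshape(1, -1)
new_B = np.clip(net.predict(new_A), 0.0, 1.0)
print(np.round(new_B, 2))
```

As the abstract itself cautions, the inferred membership function is only trustworthy when the new antecedent resembles the stored rules.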
Abstract: The typical BDI (belief-desire-intention) model of an agent is not efficiently computable, and its strict logical expression is not easily applicable to the AUV (autonomous underwater vehicle) domain with its uncertainties. In this paper, an AUV fuzzy neural BDI model is proposed. The model is a fuzzy neural network composed of five layers: an input layer (beliefs and desires), a fuzzification layer, a commitment layer, a fuzzy intention layer, and a defuzzification layer. In the model, fuzzy commitment rules and a neural network are combined to form intentions from beliefs and desires. The model is demonstrated by solving a PEG (pursuit-evasion game), and the simulation results are satisfactory.
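A rough sketch of how such a five-layer forward pass could look follows; the layer names come from the abstract, while the Gaussian membership functions, the product t-norm, the centroid-style defuzzification, and the placeholder intention values are assumptions rather than the paper's actual design.

```python
# Rough sketch of a five-layer fuzzy neural forward pass (layer names from the
# abstract; membership functions, t-norm, rule consequents and all numbers are
# placeholder assumptions, not the paper's design).
import numpy as np

def gaussian(x, c, s):
    return np.exp(-((x - c) ** 2) / (2.0 * s ** 2))

def fuzzy_bdi_forward(beliefs, desires):
    # Layer 1: input layer concatenates beliefs and desires.
    x = np.concatenate([beliefs, desires])
    # Layer 2: fuzzification with three Gaussian sets ("low/medium/high") per input.
    centers = np.linspace(0.0, 1.0, 3)
    mu = gaussian(x[:, None], centers[None, :], 0.25)        # (n_inputs, 3)
    # Layer 3: commitment layer, rule firing strengths via a product t-norm.
    rules = np.array(np.meshgrid(*[range(3)] * len(x))).reshape(len(x), -1).T
    firing = np.prod(mu[np.arange(len(x)), rules], axis=1)   # one strength per rule
    # Layer 4: fuzzy intention layer, each rule votes for an intention value
    # (random placeholders here; in a trained model these would be learned).
    intention_values = np.random.default_rng(0).uniform(0.0, 1.0, len(firing))
    # Layer 5: defuzzification by weighted average (centroid-style).
    return float(np.sum(firing * intention_values) / (np.sum(firing) + 1e-9))

print(fuzzy_bdi_forward(beliefs=np.array([0.8, 0.3]), desires=np.array([0.6])))
```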
Abstract: In this paper, we propose two weighted learning methods for the construction of single-hidden-layer feedforward neural networks. Both methods incorporate weighted least squares. Our idea is to let training instances nearer to the query make larger contributions to the estimated output. By minimizing the weighted mean squared error function, optimal networks can be obtained. The results of a number of experiments demonstrate the effectiveness of the proposed methods.
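The weighting idea can be illustrated with a small sketch: a single hidden layer with fixed random weights produces features, each training instance is weighted by its closeness to the query through a Gaussian kernel, and the output weights are obtained in closed form by weighted least squares. The random-feature hidden layer, the Gaussian kernel, and the bandwidth are assumptions for illustration only, not necessarily either of the paper's two methods.

```python
# Illustrative sketch (assumed design): random-feature single hidden layer,
# Gaussian distance weights favouring training instances near the query, and
# output weights from weighted least squares, i.e. the minimiser of the
# weighted mean squared error.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

n_hidden = 25
W_in = rng.normal(size=(X.shape[1], n_hidden))    # fixed random input weights
b = rng.normal(size=n_hidden)                     # fixed random hidden biases

def hidden(Z):
    return np.tanh(Z @ W_in + b)                  # single hidden layer features

def predict(query, bandwidth=0.5):
    # Instances closer to the query get larger weights.
    w = np.exp(-np.sum((X - query) ** 2, axis=1) / (2.0 * bandwidth ** 2))
    H = hidden(X)
    # Weighted least squares: beta = argmin_b sum_i w_i * (y_i - H_i b)^2.
    beta = np.linalg.solve(H.T @ (w[:, None] * H) + 1e-6 * np.eye(n_hidden),
                           H.T @ (w * y))
    return hidden(query.reshape(1, -1)) @ beta

print(predict(np.array([1.0])))                   # rough estimate of sin(1.0)
```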
Abstract: In this paper, we introduce a type of approximation operator of neural networks with sigmoidal functions on compact intervals, and obtain pointwise and uniform estimates of the approximation. To improve the approximation rate, we further introduce a type of combination of neural networks. Moreover, we show that the derivatives of functions can also be simultaneously approximated by the derivatives of the combinations. We also apply our method to construct approximation operators of neural networks with sigmoidal functions on infinite intervals.
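As a concrete, purely illustrative instance of this kind of operator, the following sketch builds a single-hidden-layer sigmoidal network whose steep units emulate steps at grid nodes on a compact interval, so the uniform error shrinks as the grid is refined. The specific construction and parameters are assumptions, not the operators introduced in the paper.

```python
# Purely illustrative sketch of a sigmoidal-network approximation operator on a
# compact interval (the construction and parameters are assumptions): steep
# sigmoidal units emulate steps at grid nodes, and refining the grid shrinks
# the uniform error.
import numpy as np

def sigmoid(t):
    # Logistic sigmoid written via tanh to avoid overflow for large |t|.
    return 0.5 * (1.0 + np.tanh(0.5 * t))

def sigmoidal_operator(f, a, b, n, steepness=200.0):
    """Return N_n(f), a one-hidden-layer sigmoidal approximant of f on [a, b]."""
    nodes = np.linspace(a, b, n + 1)
    values = f(nodes)
    def N(x):
        x = np.asarray(x, dtype=float)
        out = np.full_like(x, values[0])
        # Each unit adds the jump f(x_k) - f(x_{k-1}) once x passes node k.
        for k in range(1, n + 1):
            out += (values[k] - values[k - 1]) * sigmoid(steepness * (x - nodes[k]))
        return out
    return N

f = np.cos
N = sigmoidal_operator(f, 0.0, np.pi, n=64)
xs = np.linspace(0.0, np.pi, 1000)
print(np.max(np.abs(N(xs) - f(xs))))   # empirical uniform error on [0, pi]
```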
Abstract: Based on current research on applications of chaotic neuron networks for information processing, the stability and convergence of the chaotic neuron network are proved from the viewpoint of an energy function. Moreover, a new auto-associative matrix is devised for an artificial neural network composed of chaotic neurons, and an improved chaotic neuron network for associative memory is thus built. Finally, the associative recall process of the network is analyzed in detail, and explanations of the improvement are given.
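For orientation, here is a sketch of the conventional building blocks the abstract refers to: a Hebbian (outer-product) auto-associative matrix and an Aihara-style chaotic neuron update with an internal state and a refractoriness term. The parameters are chosen mildly enough that recall settles; the paper's improved auto-associative matrix and its energy-function analysis are not reproduced here.

```python
# Sketch of conventional building blocks only (assumed; not the paper's improved
# matrix): a Hebbian outer-product auto-associative matrix plus an Aihara-style
# chaotic neuron update with internal state and refractoriness, parameterised
# mildly enough that recall settles on the stored pattern.
import numpy as np

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1]], dtype=float)
n = patterns.shape[1]

# Auto-associative (outer-product) weight matrix with zero self-connections.
W = patterns.T @ patterns / n
np.fill_diagonal(W, 0.0)

def recall(probe, steps=20, k=0.5, alpha=0.1, eps=0.02):
    """Recall with a damped internal state y and a refractoriness term -alpha*x."""
    x = probe.astype(float)
    y = np.zeros(n)
    for _ in range(steps):
        y = k * y + W @ x - alpha * x   # decayed history + coupling - self-inhibition
        x = np.tanh(y / eps)            # steep output function, nearly binary
    return np.sign(x)

noisy = patterns[0].copy()
noisy[0] *= -1.0                        # corrupt one component of the first pattern
print(recall(noisy))                    # recovers patterns[0] in this small example
```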