Abstract
In this study, a hybrid learning algorithm for training the Recurrent Fuzzy Neural Network (RFNN) is introduced. The algorithm addresses the main problems of Gradient Descent (GD) based optimization of RFNNs, namely instability, entrapment in local minima, and poor generalization of the trained network to test data. Particle Swarm Optimization (PSO), acting as a global optimizer, tunes the parameters of the membership functions, while the GD algorithm optimizes the parameters of the consequent part of the RFNN. Because PSO is a derivative-free optimization technique, a simpler training procedure for the RFNN is obtained. The results are compared with those of the pure GD algorithm.
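The abstract only outlines the hybrid scheme, so the following is a minimal, self-contained sketch of how such a premise/consequent split could look. It assumes a zero-order TSK-style RFNN with Gaussian membership functions and a self-feedback gain on each rule's firing strength; every name, constant, and structural choice here (e.g. `rfnn_forward`, `NUM_RULES`, the toy sine-prediction task) is an illustrative assumption, not the authors' implementation. PSO searches the premise parameters without derivatives, while gradient descent adjusts only the linear consequent weights.

```python
# Hedged sketch of the hybrid PSO + GD training described in the abstract.
# The network structure and all names are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
NUM_RULES, NUM_INPUTS = 4, 1            # assumed small RFNN for a 1-D prediction task

def unpack(p):
    """Split a flat PSO particle into centers, widths and recurrent gains (assumed layout)."""
    n = NUM_RULES * NUM_INPUTS
    centers = p[:n].reshape(NUM_RULES, NUM_INPUTS)
    sigmas = np.abs(p[n:2 * n]).reshape(NUM_RULES, NUM_INPUTS) + 1e-3   # keep widths positive
    feedback = np.clip(p[2 * n:], 0.0, 0.9)                             # bounded recurrent gains
    return centers, sigmas, feedback

def rfnn_forward(x_seq, centers, sigmas, feedback, consequents):
    """Run the assumed RFNN over a sequence; return outputs and normalized firing strengths."""
    prev = np.zeros(NUM_RULES)
    outputs, strengths = [], []
    for x in x_seq:
        mu = np.exp(-((x - centers) ** 2) / (2.0 * sigmas ** 2))  # Gaussian memberships
        fire = mu.prod(axis=1) + feedback * prev                  # recurrent firing strength
        fire_n = fire / (fire.sum() + 1e-12)
        outputs.append(fire_n @ consequents)                      # zero-order consequents
        strengths.append(fire_n)
        prev = fire_n
    return np.asarray(outputs), np.asarray(strengths)

def mse(params, x_seq, y_seq, consequents):
    y_hat, _ = rfnn_forward(x_seq, *unpack(params), consequents)
    return np.mean((y_hat - y_seq) ** 2)

# Toy identification task (assumed): predict x(t+1) from x(t) of a noisy sine wave.
t = np.linspace(0, 8 * np.pi, 400)
series = np.sin(t) + 0.05 * rng.standard_normal(t.size)
x_seq, y_seq = series[:-1].reshape(-1, 1), series[1:]

dim = 2 * NUM_RULES * NUM_INPUTS + NUM_RULES         # centers + widths + feedback gains
n_particles, w, c1, c2 = 20, 0.7, 1.5, 1.5           # common PSO constants (assumed)
pos = rng.uniform(-1, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.full(n_particles, np.inf)
gbest, gbest_cost = pos[0].copy(), np.inf
consequents = rng.uniform(-0.5, 0.5, NUM_RULES)

for epoch in range(30):
    # --- PSO step: derivative-free search over the premise (membership) parameters ---
    for i in range(n_particles):
        cost = mse(pos[i], x_seq, y_seq, consequents)
        if cost < pbest_cost[i]:
            pbest_cost[i], pbest[i] = cost, pos[i].copy()
        if cost < gbest_cost:
            gbest_cost, gbest = cost, pos[i].copy()
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel

    # --- GD step: gradient descent on the consequent weights, premise fixed at gbest ---
    centers, sigmas, feedback = unpack(gbest)
    for _ in range(20):
        y_hat, phi = rfnn_forward(x_seq, centers, sigmas, feedback, consequents)
        err = y_hat - y_seq
        grad = 2.0 * (err @ phi) / len(err)           # exact gradient: output is linear in consequents
        consequents -= 0.05 * grad
    gbest_cost = mse(gbest, x_seq, y_seq, consequents)  # re-score best premise with new consequents

    print(f"epoch {epoch:2d}  train MSE = {gbest_cost:.5f}")
```

The alternation mirrors the division of labour stated in the abstract: the derivative-free PSO handles the non-convex premise parameters, while GD is left with the consequent part, whose error surface is quadratic in its parameters.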
Original language | English |
---|---|
Title of host publication | Proceedings of the 2007 IEEE International Conference on Mechatronics and Automation, ICMA 2007 |
Number of pages | 6 |
Publication date | 2007 |
Pages | 2598-2603 |
Article number | 4303966 |
ISBN (Print) | 1424408288, 9781424408283 |
Publication status | Published - 2007 |
Externally published | Yes |
Event | 2007 IEEE International Conference on Mechatronics and Automation, Harbin, China, 5 Aug 2007 → 8 Aug 2007, https://ieeexplore.ieee.org/xpl/conhome/4303487/proceeding |
Conference
Conference | 2007 IEEE International Conference on Mechatronics and Automation |
---|---|
Country/Territory | China |
City | Harbin |
Period | 05/08/2007 → 08/08/2007 |
Sponsor | IEEE, Harbin Engineering University, Kagawa University, National Natural Science Foundation of China, Ministry of Education, China |
Keywords
- Gradient descent
- Identification
- Particle swarm optimization
- Prediction
- Recurrent fuzzy neural network