Abstract
The majority of neural models for pattern recognition have a fixed architecture during training. A typical consequence is non-optimal and often unnecessarily large networks. In this paper we propose a Self-structuring Hidden Control (SHC) neural model for pattern recognition, which establishes a near-optimal architecture during training. We typically achieve a significant reduction of the network architecture in terms of the number of hidden Processing Elements (PEs). The SHC model combines self-structuring architecture generation with non-linear prediction and hidden Markov modelling. The paper presents a theorem for self-structuring neural models stating that these models are universal approximators and thus relevant for real-world pattern recognition. Using SHC models containing as few as five hidden PEs each for an isolated word recognition task resulted in a recognition rate of 98.4%. SHC models can furthermore be applied to continuous speech recognition.
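For readers unfamiliar with hidden-control predictive models, the sketch below illustrates the basic idea of scoring an utterance with a neural predictor whose input is augmented by a "hidden control" state, using a left-to-right dynamic-programming alignment over the prediction errors. It is a minimal illustration only: the layer sizes, the state topology, and all names are assumptions, and the training and self-structuring procedure of the actual SHC model is not shown.

```python
# Minimal sketch of hidden-control prediction scoring (illustrative only;
# dimensions, topology, and names are assumptions, not from the paper).
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM = 12      # feature coefficients per frame (assumed)
N_STATES = 5       # "hidden control" states in one word model (assumed)
N_HIDDEN = 5       # as few as five hidden PEs, as in the paper's experiments

# One-hidden-layer predictor: input = current frame + one-hot control state,
# output = prediction of the next frame. Weights are untrained here.
W1 = rng.normal(scale=0.1, size=(FEAT_DIM + N_STATES, N_HIDDEN))
W2 = rng.normal(scale=0.1, size=(N_HIDDEN, FEAT_DIM))

def predict_next(frame, state):
    """Predict the next feature frame from the current frame and control state."""
    control = np.eye(N_STATES)[state]
    h = np.tanh(np.concatenate([frame, control]) @ W1)
    return h @ W2

def word_score(frames):
    """Accumulated squared prediction error along the best left-to-right
    control-state sequence, found by Viterbi-style dynamic programming."""
    T = len(frames) - 1
    # err[t, s] = error of predicting frame t+1 from frame t under state s
    err = np.array([[np.sum((predict_next(frames[t], s) - frames[t + 1]) ** 2)
                     for s in range(N_STATES)] for t in range(T)])
    cost = np.full(N_STATES, np.inf)
    cost[0] = err[0, 0]                      # must start in the first state
    for t in range(1, T):
        stay = cost + err[t]                 # remain in the same state
        move = np.full(N_STATES, np.inf)
        move[1:] = cost[:-1] + err[t, 1:]    # advance to the next state
        cost = np.minimum(stay, move)
    return cost[-1]                          # must end in the last state

# Usage: score a random 30-frame "utterance"; in recognition, the word model
# with the lowest score would be selected.
utterance = rng.normal(size=(30, FEAT_DIM))
print(word_score(utterance))
```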
Original language | English
---|---
Title of host publication | Neural Networks for Signal Processing II: Proceedings of the 1992 IEEE-SP Workshop
Publisher | IEEE Press
Publication date | 1992
Pages | 149-156
ISBN (Print) | 0-7803-0557-4
Publication status | Published - 1992
Externally published | Yes
Event | 1992 IEEE Workshop on Neural Networks for Signal Processing, Hotel Marielyst, Helsingoer, Denmark, 31 Aug 1992 → 2 Sept 1992
Conference
Conference | 1992 IEEE Workshop on Neural Networks for Signal Processing
---|---
Location | Hotel Marielyst
Country/Territory | Denmark
City | Helsingoer
Period | 31/08/1992 → 02/09/1992
Internet address | http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=631