Self-structuring hidden control neural model for speech recognition

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review


The majority of neural models for pattern recognition have a fixed architecture during training. A typical consequence is non-optimal and often too large networks. In this paper we propose a Self-structuring Hidden Control (SHC) neural model for pattern recognition, which establishes a near-optimal architecture during training. We typically achieve a network architecture reduction of approx. 80-90% in terms of the number of hidden Processing Elements (PEs). The SHC model combines self-structuring architecture generation with non-linear prediction and hidden Markov modelling. The paper presents a theorem for self-structuring neural models stating that these models are universal approximators and thus relevant for real-world pattern recognition. Using SHC models containing as few as five hidden PEs each for an isolated word recognition task resulted in a recognition rate of 98.4%. SHC models can furthermore be applied to continuous speech recognition.
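The core idea of a hidden control neural model can be illustrated with a small sketch: a predictor network maps the previous speech frame plus a discrete "hidden control" input to a prediction of the next frame, and an utterance is scored by the accumulated prediction error minimised over control states. The sketch below, in NumPy, is a hypothetical illustration only: the frame dimension, number of control states, random weights, and the greedy per-frame state choice (standing in for the Viterbi-style search used with hidden control models) are all assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

FRAME_DIM = 10   # dimensionality of a speech feature frame (assumed)
N_STATES = 4     # number of hidden control states (assumed)
N_HIDDEN = 5     # hidden PEs, echoing the "as few as five" in the abstract

# One small predictor network: input is the previous frame concatenated
# with a one-hot hidden control vector; output predicts the next frame.
# Weights here are random placeholders, not trained parameters.
W1 = rng.standard_normal((N_HIDDEN, FRAME_DIM + N_STATES)) * 0.1
b1 = np.zeros(N_HIDDEN)
W2 = rng.standard_normal((FRAME_DIM, N_HIDDEN)) * 0.1
b2 = np.zeros(FRAME_DIM)

def predict_next_frame(x_prev, state):
    """Predict the next frame from the previous frame and a control state."""
    c = np.eye(N_STATES)[state]                     # one-hot control input
    h = np.tanh(W1 @ np.concatenate([x_prev, c]) + b1)
    return W2 @ h + b2

def model_score(frames):
    """Accumulated squared prediction error over an utterance, with a
    greedy per-frame choice of control state (a simplification of the
    dynamic-programming search a full model would use)."""
    total = 0.0
    for t in range(1, len(frames)):
        errs = [np.sum((predict_next_frame(frames[t - 1], s) - frames[t]) ** 2)
                for s in range(N_STATES)]
        total += min(errs)                          # best state for this frame
    return total

utterance = rng.standard_normal((20, FRAME_DIM))
score = model_score(utterance)
print(score)
```

In a recogniser built this way, one such predictor would be trained per word, and an input utterance would be assigned to the word model with the lowest accumulated prediction error; the self-structuring aspect of the paper additionally prunes or grows the hidden layer during training.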
Original language: English
Title of host publication: ICASSP-92
Publisher: IEEE Press
Publication date: 1992
ISBN (Print): 0-7803-0532-9
Publication status: Published - 1992
Externally published: Yes
Event: IEEE International Conference on Acoustics, Speech, and Signal Processing - San Francisco, United States
Duration: 23 Mar 1992 - 26 Mar 1992


Conference: IEEE International Conference on Acoustics, Speech, and Signal Processing
Country/Territory: United States
City: San Francisco


