Abstract
A neural architecture for adaptive filtering that incorporates a modularization principle is proposed. It facilitates a sparse parameterization, i.e., fewer parameters have to be estimated in a supervised training procedure. The main idea is to use a preprocessor that determines the dimension of the input space and can be designed independently of the subsequent nonlinearity. Two preprocessors are suggested: a derivative preprocessor and principal component analysis. A novel implementation of fixed Volterra nonlinearities is given; it enforces boundedness of the polynomials by scaling and limiting the input signals. The nonlinearity is constructed from Chebyshev polynomials. The authors apply a second-order algorithm for updating the weights of the adaptive nonlinearities. Finally, the simulations indicate that the two kinds of preprocessing tend to complement each other, while there is no clear difference between the performance of the adaptive (ANL) and fixed (FNL) nonlinearities.
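As an illustration of two of the ingredients named in the abstract, the sketch below shows a PCA-based preprocessor that reduces the dimension of the input space, and a bounded Chebyshev polynomial feature map in which the input is scaled and hard-limited to [-1, 1] so that every polynomial term stays bounded. This is a minimal sketch under stated assumptions: the function names, the hard-limiting scheme, and the `scale` and `order` parameters are illustrative choices, not details taken from the paper.

```python
import numpy as np

def pca_preprocessor(U, n_components):
    """Illustrative PCA preprocessor: project input vectors (rows of U)
    onto the leading principal components to reduce input dimension."""
    U0 = U - U.mean(axis=0)                        # center the data
    _, _, Vt = np.linalg.svd(U0, full_matrices=False)
    return U0 @ Vt[:n_components].T                # scores on leading components

def chebyshev_nonlinearity(u, scale=1.0, order=3):
    """Illustrative bounded fixed nonlinearity: scale and hard-limit the
    input to [-1, 1], then expand it in Chebyshev polynomials T_0..T_order
    using the recurrence T_{n+1}(x) = 2 x T_n(x) - T_{n-1}(x).
    On [-1, 1] every T_n is bounded by 1, so the feature map is bounded."""
    x = np.clip(np.asarray(u, dtype=float) / scale, -1.0, 1.0)
    T = [np.ones_like(x), x]
    for _ in range(2, order + 1):
        T.append(2.0 * x * T[-1] - T[-2])
    return np.stack(T[: order + 1])                # shape: (order + 1, ...)

# Example: a third-order bounded feature map of a scaled sine sweep
u = np.sin(np.linspace(0.0, 2.0 * np.pi, 8))
features = chebyshev_nonlinearity(u, scale=2.0, order=3)
```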
Original language | English |
---|---|
Title of host publication | Proceedings of the IEEE Workshop Neural Networks for Signal Processing |
Publisher | IEEE |
Publication date | 1991 |
Pages | 533-542 |
ISBN (Print) | 0-7803-0118-8 |
Publication status | Published - 1991 |
Event | 1991 IEEE Workshop on Neural Networks for Signal Processing - Princeton, United States |
Conference
Conference | 1991 IEEE Workshop on Neural Networks for Signal Processing |
---|---|
Number | 1 |
Country/Territory | United States |
City | Princeton |
Period | 30/09/1991 → 01/10/1991 |
Internet address | https://ieeexplore.ieee.org/xpl/conhome/574/proceeding |