| Peer-Reviewed

Convergence of Online Gradient Method for Pi-sigma Neural Networks with Inner-penalty Terms

Received: 14 March 2016    Accepted: 30 March 2016    Published: 10 May 2016
Abstract

This paper investigates an online gradient method with an inner-penalty term for the pi-sigma network, a feedforward network that uses product cells as output units to indirectly incorporate the capabilities of higher-order networks while using fewer weights and processing units. Penalty-term methods have been widely used to improve the generalization performance of feedforward neural networks and to control the magnitude of the network weights. We prove the monotonicity of the error function, the boundedness of the weights under the inner-penalty term, and both weak and strong convergence theorems for the training iteration.
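To make the setting concrete, the following is a minimal sketch of one online gradient step for a pi-sigma network with a weight-decay-style penalty added to the error function. All names (`pi_sigma_forward`, `online_step`, the learning rate `lr`, and penalty coefficient `lam`) are illustrative assumptions, not notation from the paper, and a sigmoid output activation is assumed.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pi_sigma_forward(W, x):
    """Summing units h_j = w_j . x, then a product output y = g(prod_j h_j)."""
    h = W @ x                       # each row of W feeds one summing unit
    return sigmoid(np.prod(h)), h

def online_step(W, x, t, lr=0.01, lam=1e-4):
    """One online update on E = 0.5*(y - t)^2 + lam*||W||^2 (penalized error)."""
    y, h = pi_sigma_forward(W, x)
    loss = 0.5 * (y - t) ** 2 + lam * np.sum(W ** 2)
    err = (y - t) * y * (1.0 - y)   # output error times sigmoid derivative
    for j in range(W.shape[0]):
        prod_rest = np.prod(np.delete(h, j))       # product of the other units
        grad_j = err * prod_rest * x + 2.0 * lam * W[j]
        W[j] -= lr * grad_j
    return W, loss
```

Repeated steps on a sample decrease the penalized error, and the penalty term keeps the weight magnitudes under control, which is the mechanism behind the boundedness and monotonicity results the paper establishes.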

Published in American Journal of Neural Networks and Applications (Volume 2, Issue 1)
DOI 10.11648/j.ajnna.20160201.11
Page(s) 1-5
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2024. Published by Science Publishing Group

Keywords

Convergence, Pi-sigma Network, Online Gradient Method, Inner-penalty, Boundedness

References
[1] A J Hussain and P Liatsis, Recurrent pi-sigma networks for DPCM image coding. Neurocomputing, 55 (2002) 363-382.
[2] Y Shin and J Ghosh, The pi-sigma network: An efficient higher-order neural network for pattern classification and function approximation. International Joint Conference on Neural Networks, 1(1991) 13–18.
[3] P L Bartlett, For valid generalization, the size of the weights is more important than the size of the network, Advances in Neural Information Processing Systems 9 (1997) 134–140.
[4] L J Jiang, F Xu and S R Piao, Application of pi-sigma neural network to real-time classification of seafloor sediments. Applied Acoustics, 24(2005) 346–350.
[5] R Reed, Pruning algorithms-a survey. IEEE Transactions on Neural Networks 8 (1997) 185–204.
[6] G Hinton, Connectionist learning procedures, Artificial Intelligence 40(1989)185-243.
[7] S Geman, E Bienenstock, R Doursat, Neural networks and the bias/variance dilemma, Neural Computation 4 (1992) 1–58.
[8] S Loone and G Irwin, Improving neural network training solutions using regularisation, Neurocomputing 37 (2001) 71-90.
[9] A S Weigend, D E Rumelhart and B A Huberman, Generalization by weight-elimination applied to currency exchange rate prediction. Proc. Intl Joint Conf. on Neural Networks 1 (Seattle, 1991) 837-841.
[10] Y Shin and J Ghosh, Approximation of multivariate functions using ridge polynomial networks, International Joint Conference on Neural Networks 2 (1992) 380-385.
[11] M Sinha, K Kumar and P K Kalra, Some new neural network architectures with improved learning schemes. Soft Computing, 4 (2000) 214-223.
[12] R Setiono, A penalty-function approach for pruning feed forward neural networks, Neural Networks 9 (1997) 185–204.
[13] W Wu and Y S Xu, Deterministic convergence of an online gradient method for neural networks, Journal of Computational and Applied Mathematics 144 (1-2) (2002) 335-347.
[14] H S Zhang and W Wu, Boundedness and convergence of online gradient method with penalty for linear output feed forward neural networks, Neural Process Letters 29 (2009) 205–212.
[15] H F Lu, W Wu, C Zhang and X Yan, Convergence of Gradient Descent Algorithm for Pi-Sigma Neural Networks, Journal of Information and Computational Science 3: 3 (2006) 503-509.
[16] Y X Yuan and W Y Sun, Optimization Theory and Methods, Science Press, Beijing, 2001.
[17] J Kong and W Wu, Online gradient methods with a punishing term for neural networks. Northeast Math. J. 17 (3) (2001) 371-378.
[18] W Wu, G R Feng and X Z Li, Training multilayer perceptrons via minimization of sum of ridge functions, Advances in Computational Mathematics 17 (2002) 331-347.
Cite This Article
  • APA Style

    Kh. Sh. Mohamed, Xiong Yan, Y. Sh. Mohammed, Abd-Elmoniem A. Elzain, Habtamu Z. A., et al. (2016). Convergence of Online Gradient Method for Pi-sigma Neural Networks with Inner-penalty Terms. American Journal of Neural Networks and Applications, 2(1), 1-5. https://doi.org/10.11648/j.ajnna.20160201.11


    ACS Style

    Kh. Sh. Mohamed; Xiong Yan; Y. Sh. Mohammed; Abd-Elmoniem A. Elzain; Habtamu Z. A., et al. Convergence of Online Gradient Method for Pi-sigma Neural Networks with Inner-penalty Terms. Am. J. Neural Netw. Appl. 2016, 2(1), 1-5. doi: 10.11648/j.ajnna.20160201.11


    AMA Style

    Kh. Sh. Mohamed, Xiong Yan, Y. Sh. Mohammed, Abd-Elmoniem A. Elzain, Habtamu Z. A., et al. Convergence of Online Gradient Method for Pi-sigma Neural Networks with Inner-penalty Terms. Am J Neural Netw Appl. 2016;2(1):1-5. doi: 10.11648/j.ajnna.20160201.11


  • @article{10.11648/j.ajnna.20160201.11,
      author = {Kh. Sh. Mohamed and Xiong Yan and Y. Sh. Mohammed and Abd-Elmoniem A. Elzain and Habtamu Z. A. and Abdrhaman M. Adam},
      title = {Convergence of Online Gradient Method for Pi-sigma Neural Networks with Inner-penalty Terms},
      journal = {American Journal of Neural Networks and Applications},
      volume = {2},
      number = {1},
      pages = {1-5},
      doi = {10.11648/j.ajnna.20160201.11},
      url = {https://doi.org/10.11648/j.ajnna.20160201.11},
      eprint = {https://article.sciencepublishinggroup.com/pdf/10.11648.j.ajnna.20160201.11},
  abstract = {This paper investigates an online gradient method with an inner-penalty term for the pi-sigma network, a feedforward network that uses product cells as output units to indirectly incorporate the capabilities of higher-order networks while using fewer weights and processing units. Penalty-term methods have been widely used to improve the generalization performance of feedforward neural networks and to control the magnitude of the network weights. We prove the monotonicity of the error function, the boundedness of the weights under the inner-penalty term, and both weak and strong convergence theorems for the training iteration.},
     year = {2016}
    }
    


  • TY  - JOUR
    T1  - Convergence of Online Gradient Method for Pi-sigma Neural Networks with Inner-penalty Terms
    AU  - Kh. Sh. Mohamed
    AU  - Xiong Yan
    AU  - Y. Sh. Mohammed
    AU  - Abd-Elmoniem A. Elzain
    AU  - Habtamu Z. A.
    AU  - Abdrhaman M. Adam
    Y1  - 2016/05/10
    PY  - 2016
    N1  - https://doi.org/10.11648/j.ajnna.20160201.11
    DO  - 10.11648/j.ajnna.20160201.11
    T2  - American Journal of Neural Networks and Applications
    JF  - American Journal of Neural Networks and Applications
    JO  - American Journal of Neural Networks and Applications
    SP  - 1
    EP  - 5
    PB  - Science Publishing Group
    SN  - 2469-7419
    UR  - https://doi.org/10.11648/j.ajnna.20160201.11
    AB  - This paper investigates an online gradient method with an inner-penalty term for the pi-sigma network, a feedforward network that uses product cells as output units to indirectly incorporate the capabilities of higher-order networks while using fewer weights and processing units. Penalty-term methods have been widely used to improve the generalization performance of feedforward neural networks and to control the magnitude of the network weights. We prove the monotonicity of the error function, the boundedness of the weights under the inner-penalty term, and both weak and strong convergence theorems for the training iteration.
    VL  - 2
    IS  - 1
    ER  - 


Author Information
  • Mathematical Department, College of Science, Dalanj University, Dalanj, Sudan; School of Mathematical Sciences, Dalian University of Technology, Dalian, China

  • School of Science, Liaoning University of Science & Technology, Anshan, China

  • Physics Department, College of Education, Dalanj University, Dalanj, Sudan; Department of Physics, College of Science & Art, Qassim University, Oklat Al-Skoor, Saudi Arabia

  • Department of Physics, College of Science & Art, Qassim University, Oklat Al-Skoor, Saudi Arabia; Department of Physics, University of Kassala, Kassala, Sudan

  • School of Mathematical Sciences, Dalian University of Technology, Dalian, China

  • School of Mathematical Sciences, Dalian University of Technology, Dalian, China
