Adaptive Moment Estimation To Minimize Square Error In Backpropagation Algorithm
Keywords: Gradient Descent Backpropagation, Adaptive Moment Estimation, Minimize Square Error
The back-propagation neural network has well-known weaknesses: gradient descent on the error function converges slowly, training times are long, the network is prone to over-fitting, and training easily becomes trapped in local optima. Back-propagation itself works by propagating the output error backwards through the network to update the weights, minimizing the error at each iteration. This paper investigates and evaluates the performance of Adaptive Moment Estimation (ADAM) in minimizing the squared error of the back-propagation gradient descent algorithm. ADAM can accelerate training, adapt to changes in the system, and optimize many parameters at low computational cost. The results of the study indicate that adaptive moment estimation can minimize the squared error at the output of the neural network.
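To make the idea concrete, the following is a minimal sketch (not the paper's exact experimental setup) of the ADAM update rule minimizing a squared error by gradient descent. A single weight `w` is trained so that `w*x` matches a target `y`; the function name, learning rate, and toy data are illustrative assumptions, while `beta1`, `beta2`, and `eps` are the standard ADAM defaults.

```python
import math

def adam_minimize(x, y, w=0.0, lr=0.01, beta1=0.9, beta2=0.999,
                  eps=1e-8, steps=2000):
    """Minimize the squared error E = (y - w*x)^2 over w using ADAM."""
    m, v = 0.0, 0.0                       # first and second moment estimates
    for t in range(1, steps + 1):
        grad = -2.0 * (y - w * x) * x     # dE/dw for the squared error
        m = beta1 * m + (1 - beta1) * grad            # momentum (mean of grads)
        v = beta2 * v + (1 - beta2) * grad * grad     # adaptive scale (mean of grad^2)
        m_hat = m / (1 - beta1 ** t)      # bias correction for the
        v_hat = v / (1 - beta2 ** t)      # zero-initialized moments
        w -= lr * m_hat / (math.sqrt(v_hat) + eps)    # per-parameter step size
    return w

w = adam_minimize(x=2.0, y=6.0)           # the ideal weight is 3.0
```

In a full back-propagation network the same update is applied element-wise to every weight, with `grad` supplied by the backward pass; the per-parameter scaling by `sqrt(v_hat)` is what lets ADAM tune many parameters without hand-picking a learning rate for each.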
Copyright (c) 2020 Data Science: Journal of Computing and Applied Informatics
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
The Authors submitting a manuscript do so on the understanding that if accepted for publication, copyright of the article shall be assigned to Data Science: Journal of Informatics Technology and Computer Science (JoCAI) and Faculty of Computer Science and Information Technology as well as TALENTA Publisher Universitas Sumatera Utara as publisher of the journal.
Copyright encompasses the exclusive rights to reproduce and deliver the article in all forms and media. The reproduction of any part of this journal, its storage in databases, and its transmission by any form or media will be allowed only with written permission from Data Science: Journal of Informatics Technology and Computer Science (JoCAI).