Optimization of Deep Learning using Various Optimizers, Loss Functions and Dropout
S.V.G. Reddy1, K. Thammi Reddy2, V. Valli Kumari3
1S.V.G. Reddy, Associate Professor, Department of Computer Science and Engineering, Gandhi Institute of Technology and Management University, Andhra Pradesh, India.
2Prof. K. Thammi Reddy, Professor, Department of Computer Science and Engineering, Gandhi Institute of Technology and Management University, Andhra Pradesh, India.
3Prof. V. Valli Kumari, Professor, Department of Computer Science and Engineering, College of Engineering, Andhra University, Andhra Pradesh, India.
Manuscript received on 10 December 2018 | Revised Manuscript received on 17 December 2018 | Manuscript Published on 30 December 2018 | PP: 272-279 | Volume-8 Issue-2S December 2018 | Retrieval Number: BS2726128218/19©BEIESP
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open-access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Abstract: Deep Learning is gaining prominence due to its breakthrough results in fields such as Computer Vision, Natural Language Processing, Time Series Analysis, and Health Care. Earlier, Deep Learning was implemented using batch and stochastic gradient descent algorithms and a few basic optimizers, which led to poor model performance. Today, however, considerable work is being done to enhance the performance of Deep Learning through various optimization techniques. In this context, it is proposed to build Deep Learning models using various optimizers (Adagrad, RMSProp, Adam), loss functions (mean squared error, binary cross-entropy), and the Dropout concept for Convolutional Neural Networks and Recurrent Neural Networks, and to evaluate model performance in terms of Accuracy and Loss. The proposed model achieved maximum Accuracy when the Adam optimizer and the mean squared error loss function were applied to Convolutional Neural Networks, and ran with minimum Loss when the same Adam optimizer and mean squared error loss function were applied to Recurrent Neural Networks. During regularization of the model, maximum Accuracy was achieved when Dropout with a minimum fraction 'p' of nodes was applied to Convolutional Neural Networks, and the model ran with minimum Loss when the same Dropout value was applied to Recurrent Neural Networks.
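To make the comparison concrete, the following is a minimal NumPy sketch of the textbook update rules for the three optimizers named in the abstract (Adagrad, RMSProp, Adam), demonstrated on a simple mean-squared-error-style objective. This is an illustrative assumption-based sketch, not the paper's actual Keras/TensorFlow implementation; all function names and hyperparameter defaults here are the author's own choices.

```python
import numpy as np

def adagrad_step(w, g, cache, lr=0.1, eps=1e-8):
    # Adagrad: accumulate squared gradients; per-parameter rate decays over time
    cache = cache + g ** 2
    w = w - lr * g / (np.sqrt(cache) + eps)
    return w, cache

def rmsprop_step(w, g, cache, lr=0.001, rho=0.9, eps=1e-8):
    # RMSProp: exponential moving average of squared gradients instead of a full sum
    cache = rho * cache + (1 - rho) * g ** 2
    w = w - lr * g / (np.sqrt(cache) + eps)
    return w, cache

def adam_step(w, g, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    # Adam: bias-corrected first (m) and second (v) moment estimates of the gradient
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g ** 2
    m_hat = m / (1 - b1 ** t)   # bias correction for the mean
    v_hat = v / (1 - b2 ** t)   # bias correction for the variance
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Minimise the MSE-style loss L(w) = (w - 3)^2 with Adam; gradient is 2*(w - 3)
w, m, v = 5.0, 0.0, 0.0
for t in range(1, 3001):
    g = 2.0 * (w - 3.0)
    w, m, v = adam_step(w, g, m, v, t, lr=0.01)
# w is now close to the minimum at 3.0
```

The per-parameter adaptive scaling by the square root of accumulated squared gradients is the common thread across all three rules; Adam's additional bias-corrected momentum term is one reason it tends to give the best Accuracy/Loss trade-off reported in the abstract.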
Keywords: Deep Learning, Convolutional Neural Networks, CNN, Recurrent Neural Networks, RNN, Computer Vision, Natural Language Processing, Time Series Analysis.
Scope of the Article: Deep Learning