
Learning Rate Scheduling Policies
A.Agnes Lydia1, F. Sagayaraj Francis2

1A. Agnes Lydia, Ph.D., in Computer Science and Engineering at Pondicherry Engineering College, Pondicherry, India.
2F. Sagayaraj Francis, Professor in Department of Computer Science and Engineering at Pondicherry Engineering College, Pondicherry, India.

Manuscript received on October 12, 2019. | Revised Manuscript received on 22 October, 2019. | Manuscript published on November 10, 2019. | PP: 3641-3644 | Volume-9 Issue-1, November 2019. | Retrieval Number: A4648119119/2019©BEIESP | DOI: 10.35940/ijitee.A4648.119119
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: With the availability of high-processing-capability hardware at lower prices, it has become possible to successfully train multi-layered neural networks. Since then, several training algorithms have been developed, ranging from algorithms that are statically initialized to algorithms that adapt during training. It has been observed that improving the training process of a neural network requires fine-tuning its hyper-parameters. Learning rate, decay rate, number of epochs, number of hidden layers and number of neurons in the network are some of the hyper-parameters of concern. Of these, the learning rate plays a crucial role in enhancing the learning capability of the network. The learning rate is the factor by which the weights of a neural network are adjusted with respect to the gradient descending towards the expected optimum value. This paper discusses four types of learning rate scheduling, which help to find the best learning rate in fewer epochs. Following these scheduling methods facilitates finding a better initial learning rate and applying step-wise updates during the later phase of the training process. In addition, the discussed learning rate schedules are demonstrated using the COIL-100, Caltech-101 and CIFAR-10 datasets trained on ResNet. The performance is evaluated using the metrics Precision, Recall and F1-Score. The analysis of the results shows that the performance of a learning rate scheduling policy varies with the nature of the dataset. Hence, the choice of the scheduling policy used to train a neural network should be made based on the data.
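The abstract describes the learning rate as the factor scaling weight adjustments along the gradient, updated step-wise during training. A minimal sketch of one such policy, a step-decay schedule combined with a gradient-descent weight update, is shown below; the function names and parameter values (initial rate 0.1, decay 0.5, step size 10) are illustrative assumptions, not the exact four policies evaluated in the paper.

```python
def step_decay(initial_lr, decay_rate, step_size, epoch):
    """Step-wise schedule: multiply the rate by `decay_rate`
    every `step_size` epochs (illustrative parameters only)."""
    return initial_lr * (decay_rate ** (epoch // step_size))

def sgd_step(weight, gradient, lr):
    """Gradient-descent update: weights move against the gradient,
    scaled by the scheduled learning rate."""
    return weight - lr * gradient

# The schedule holds 0.1 for epochs 0-9, then halves it every 10 epochs.
lrs = [step_decay(0.1, 0.5, 10, e) for e in range(30)]
```

In practice such a schedule is queried once per epoch, and the returned rate is passed to every weight update in that epoch, which matches the "step-wise updation during the later phase of training" described above.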
Keywords: Decay Rate, Hyper-Parameter Tuning, Learning Rates, Neural Network, Scheduling Policy.
Scope of the Article: e-Learning