
Asymmetric Sigmoidal Activation Function for Feed-Forward Artificial Neural Networks
Ruchi Sehrawat, Pravin Chandra, Udayan Ghose

Manuscript received on September 16, 2019. | Revised manuscript received on September 24, 2019. | Manuscript published on October 10, 2019. | PP: 852-858 | Volume-8 Issue-12, October 2019. | Retrieval Number: L33101081219/2019©BEIESP | DOI: 10.35940/ijitee.L3310.1081219
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Feed-forward artificial neural networks are an established technique under the supervised learning paradigm for the solution of learning tasks. The mathematical result that justifies this technique is that these networks can approximate any continuous function to the desired degree of accuracy, provided the hidden nodes of the network use non-linear functions of a specific kind. In practice, sigmoidal non-linearities, called activation functions, are most commonly used. In this paper we propose an asymmetric activation function. Networks using the proposed activation function are compared against networks using the widely used logistic and hyperbolic tangent activation functions on 12 function approximation problems. The results obtained allow us to infer that the proposed activation function, in general, reaches deeper minima of the error measures and achieves better generalization error values.
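For concreteness, the sketch below shows the two baseline activation functions named in the abstract (logistic and hyperbolic tangent) alongside an asymmetric sigmoid. Note that the abstract does not specify the paper's proposed function, so the Gompertz-style curve used here is only an illustrative stand-in for what "asymmetric" means: a bounded, monotone S-shaped function that is not symmetric about its inflection point.

```python
import numpy as np

def logistic(x):
    # Logistic sigmoid: bounded in (0, 1), symmetric about x = 0
    # in the sense that logistic(-x) = 1 - logistic(x).
    return 1.0 / (1.0 + np.exp(-x))

def hyperbolic_tangent(x):
    # Hyperbolic tangent: bounded in (-1, 1), an odd (antisymmetric)
    # function, tanh(-x) = -tanh(x).
    return np.tanh(x)

def asymmetric_sigmoid(x):
    # Illustrative asymmetric sigmoid (NOT the paper's exact proposal):
    # a Gompertz-style curve, bounded in (0, 1), whose value at x = 0 is
    # exp(-1) ~ 0.368 rather than 0.5, so it approaches its two
    # asymptotes at unequal rates.
    return np.exp(-np.exp(-x))

if __name__ == "__main__":
    xs = np.linspace(-4.0, 4.0, 9)
    print("logistic:  ", np.round(logistic(xs), 3))
    print("tanh:      ", np.round(hyperbolic_tangent(xs), 3))
    print("asymmetric:", np.round(asymmetric_sigmoid(xs), 3))
```

Printing the three curves on the same grid makes the asymmetry visible: the logistic and tanh outputs are mirror-balanced around the origin, while the asymmetric curve is not.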
Keywords: Sigmoidal Activation Function, Transfer Function, Squashing Function, Feed-Forward Artificial Neural Networks.
Scope of the Article: Artificial Intelligence