Determining Hidden Neurons with Variant Experiments in Multilayer Perceptron using Machine Learning Neural Networks
K. Meenakshi Sundaram1, S. Karthigai2

1Dr. K. Meenakshi Sundaram, Associate Professor of Computer Science, Erode Arts and Science College, Erode (Tamil Nadu), India.
2S. Karthigai, Research Scholar in Computer Science, Erode Arts and Science College, Erode (Tamil Nadu), India.

Manuscript received on 30 June 2019 | Revised Manuscript received on 05 July 2019 | Manuscript published on 30 July 2019 | PP: 2725-2729 | Volume-8 Issue-9, July 2019 | Retrieval Number: I8995078919/19©BEIESP | DOI: 10.35940/ijitee.I8995.078919
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Neural networks have been widely employed in many fields for their efficacy and superior performance, and the quality of their results carries directly into a range of analyses. Among the various types of neural network, the multilayer perceptron plays a vital role because of its adaptive learning ability: the network makes predictions based on what it learns from the training set. A multilayer perceptron has three kinds of layers: the input, hidden, and output layers. There may be more than one hidden layer, but there is exactly one input layer and one output layer. The hidden (intermediate) layer is regarded as the engine of the whole network, since it applies the non-linear activation function and has a decisive influence on the final result. The number of neurons in these layers determines the quality of the network. The neuron counts of the input and output layers are fixed by the dataset, whereas the number of hidden neurons is usually set by the user at random. Too many hidden neurons cause over-fitting, while too few cause under-fitting, and this choice has a great impact on the final outcome. This paper discusses the existing approaches for fixing the number of hidden neurons, proposes a method for fixing the neurons in the intermediate layer, and analyses the resulting network quality. The proposed procedure includes several approaches to determining the hidden neurons, and these are compared. The experiments are carried out in WEKA and the accuracy is evaluated with standard measures.
Keywords: Data Mining, Multi Layer Perceptron, Hidden Neurons, WEKA.

Scope of the Article: Deep Learning
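
As a rough illustration of the kind of experiment the abstract describes (a minimal sketch, not the authors' exact procedure), the snippet below uses WEKA's Java API to train a MultilayerPerceptron with several hidden-neuron counts and compare 10-fold cross-validated accuracy. The dataset path and the candidate neuron counts are placeholder assumptions.

```java
import java.util.Random;

import weka.classifiers.Evaluation;
import weka.classifiers.functions.MultilayerPerceptron;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class HiddenNeuronSweep {
    public static void main(String[] args) throws Exception {
        // Load an ARFF dataset (placeholder path) and mark the last attribute as the class.
        Instances data = new DataSource("dataset.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);

        // Candidate hidden-neuron counts to compare (illustrative values, not the paper's).
        int[] hiddenCounts = {2, 4, 8, 16};

        for (int h : hiddenCounts) {
            MultilayerPerceptron mlp = new MultilayerPerceptron();
            mlp.setHiddenLayers(String.valueOf(h)); // one hidden layer with h neurons

            // 10-fold cross-validation, as in WEKA's Explorer defaults.
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(mlp, data, 10, new Random(1));

            System.out.printf("hidden=%d  accuracy=%.2f%%  RMSE=%.4f%n",
                    h, eval.pctCorrect(), eval.rootMeanSquaredError());
        }
    }
}
```

For comparison, passing "a" to setHiddenLayers makes WEKA apply its default heuristic of (attributes + classes) / 2 hidden neurons, one common rule of thumb for sizing the hidden layer.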