Handwritten Digit Recognition using Multilayer Neural Network

Sneha Reddy, Tanishka Vegunta
Department of Information Technology, Chaitanya Bharathi Institute of Technology, Hyderabad, India

Abstract

Recognition of handwriting may seem a very easy task for humans, but it is a very complex one when done by a machine.

It is unproductive for humans to spend a lot of time trying to recognize characters in order to analyze collected data. Our main focus should be on analyzing the data rather than on recognizing the characters. Apart from this, manual recognition of characters may not yield the right results, since it varies from person to person. Hence, it is not accurate to a great extent and may take a lot of time and energy.

Algorithms using neural networks have made this task a lot easier and more accurate. Therefore, neural networks have been utilized with the aim of determining the characters by training a neural network. In this paper, we discuss the recognition of handwritten digits taken from the MNIST data set and check the accuracy of our implementation. This is done by training a neural network using stochastic gradient descent and backpropagation.

Keywords: Digit recognition, Backpropagation, Mini batch stochastic gradient descent

INTRODUCTION

Handwriting is a form of writing peculiar to a person, with variations in the size and shape of letters and the spacing between them. There are different styles of handwriting, including cursive, block letters, calligraphy and signatures.

This makes the task of recognizing handwritten characters complex when using traditional rule-based programming. The task becomes more natural when it is approached from a machine learning perspective using neural networks. According to Tom Mitchell, "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E." [1]

A neural network consists of neurons, which are simple processing units, with directed, weighted connections between them. For a neuron j, the propagation function receives the outputs of other neurons and transforms them, in consideration of the weights, into the network input that can be further processed by the activation function [2].

Mini batch gradient descent, as used in this paper, is a combination of the batch gradient descent and stochastic gradient descent algorithms. It calculates the model error by splitting the data set into small batches.

The backpropagation algorithm used in this paper adjusts the weights of the neural network.
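The mini batch splitting described above can be sketched as follows. The helper name `make_mini_batches` and the placeholder examples are illustrative assumptions, not part of the paper's implementation:

```python
import random

def make_mini_batches(examples, batch_size, seed=None):
    """Shuffle the (input, label) pairs and split them into mini batches."""
    examples = list(examples)
    random.Random(seed).shuffle(examples)
    return [examples[k:k + batch_size]
            for k in range(0, len(examples), batch_size)]

# 50,000 placeholder examples with mini batch size 10 -> 5,000 batches per epoch
examples = [(i, i % 10) for i in range(50000)]
batches = make_mini_batches(examples, 10, seed=0)
print(len(batches), len(batches[0]))  # 5000 10
```

The gradient is then estimated on each small batch in turn instead of on the full training set at once, which is what makes the method a compromise between batch and stochastic gradient descent.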


The algorithm works by comparing the actual output and the desired output for a given input and calculating an error value. The weights are adjusted based on this error value. The error is first calculated at the output layer and then distributed to the other layers.

MATERIALS AND METHODS

Digit recognition is done by training a multi-layer feedforward neural network using the mini batch stochastic gradient descent and backpropagation algorithms. The MNIST data set obtained from [3] contains a modified version of the original training set of 60,000 images. The original training set is split into a training set with 50,000 examples and a validation set with 10,000 examples. This set is then used to train the neural network. Each image is represented as a 1-dimensional numpy array of 784 float values between 0 and 1.
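A minimal sketch of this data representation, assuming a flattened 784-value image and, as is common for such networks though not stated explicitly above, a 10-dimensional one-hot encoding of each training label:

```python
import numpy as np

def vectorize_label(j):
    """Turn a digit label 0-9 into a 10-dimensional one-hot column vector."""
    e = np.zeros((10, 1))
    e[j] = 1.0
    return e

# A stand-in for one 28x28 MNIST image, flattened to 784 floats in [0, 1]
image = np.random.default_rng(0).random((784, 1))
label = vectorize_label(7)
print(image.shape, label.shape)  # (784, 1) (10, 1)
```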

The labels are numbers between 0 and 9 indicating which digit the image represents [3].

Figure 1: Example digits from the MNIST data set

An artificial neural network with sigmoid neurons is implemented. Therefore, the output of each neuron is calculated using the sigmoid function.

The output of each neuron is given as

output = sigmoid(w . x + b) = 1 / (1 + exp(-(w . x + b)))

where w is the weight, b is the bias and x is the input. Initially, the weights and biases of the neural network are initialized randomly using a Gaussian distribution.
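A small sketch of a single sigmoid neuron under these definitions; the weight, input and bias values are arbitrary illustrative numbers:

```python
import numpy as np

def sigmoid(z):
    """The sigmoid (logistic) activation function."""
    return 1.0 / (1.0 + np.exp(-z))

def neuron_output(w, x, b):
    """Output of a sigmoid neuron: sigmoid(w . x + b)."""
    return sigmoid(np.dot(w, x) + b)

w = np.array([0.5, -0.3])  # illustrative weights
x = np.array([1.0, 2.0])   # illustrative inputs
b = 0.1                    # illustrative bias
print(neuron_output(w, x, b))  # sigmoid(0.5 - 0.6 + 0.1) = sigmoid(0.0) = 0.5
```

The sigmoid squashes the weighted input into the range (0, 1), so each output neuron's activation can be read directly as a score for its digit.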

They are later adjusted by applying mini batch stochastic gradient descent and backpropagation. The training data is split into a number of mini batches. In each epoch, the training data is shuffled and split into mini batches of a fixed size, and gradient descent is applied. The neural network is trained for a number of epochs. The labels generated for the training data in each epoch are compared to the actual labels and the cost function is calculated. The gradient of the cost function is calculated using the backpropagation algorithm.
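The mini batch update described above can be sketched as follows. The function name and the separate `grad_fn` callback are assumptions made for illustration; per the text, the per-example gradients would come from backpropagation:

```python
import numpy as np

def update_mini_batch(w, b, batch, eta, grad_fn):
    """Average the per-example gradients over one mini batch and take a
    single gradient descent step with learning rate eta.
    grad_fn(w, b, x, y) must return (nabla_w, nabla_b) for one example."""
    nabla_w = np.zeros_like(w)
    nabla_b = np.zeros_like(b)
    for x, y in batch:
        gw, gb = grad_fn(w, b, x, y)
        nabla_w += gw
        nabla_b += gb
    m = len(batch)
    return w - (eta / m) * nabla_w, b - (eta / m) * nabla_b
```

One such call is made per mini batch, so an epoch over 50,000 examples with batch size 10 performs 5,000 parameter updates.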

This calculated gradient is then used to update the weights and biases of the neural network. Starting from the output layer and moving backwards, the biases and the weights of the connections are adjusted. The digits are labelled based on which output layer neuron has the highest activation. After training the network during each epoch, the trained network is tested using the 10,000 test images. The labels generated by the neural network are compared to the class labels given in the MNIST test data, and the number of correctly generated labels is identified.
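The labelling rule above (the output neuron with the highest activation wins) and the test-set comparison can be sketched as follows, with three-neuron outputs as stand-ins for the ten-neuron output layer:

```python
import numpy as np

def predict_digit(activations):
    """The label is the index of the output neuron with the highest activation."""
    return int(np.argmax(activations))

def accuracy(outputs, labels):
    """Fraction of test images whose predicted digit matches the true label."""
    correct = sum(predict_digit(a) == y for a, y in zip(outputs, labels))
    return correct / len(labels)

outputs = [np.array([0.1, 0.9, 0.0]), np.array([0.7, 0.2, 0.1])]
labels = [1, 2]
print(accuracy(outputs, labels))  # 0.5
```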

RESULTS AND DISCUSSION

Figure 2: Results

The above results are obtained when the number of epochs is set to 30, the mini batch size is 10 and the learning rate is 3.0. The accuracy is calculated by identifying the number of correctly identified images out of the 10,000 test images in the MNIST data set. The given results are taken as the best out of five trials. The accuracy peaks at 95.00% at the 28th epoch. The accuracy increases rapidly in the beginning with each successive epoch.

The accuracy becomes steady after a certain point and continues at approximately the same level.

CONCLUSION

Neural networks are an effective technique for the identification of handwritten digits. The accuracy of a neural network in handwriting recognition is quite high, and still higher accuracy can be achieved by optimizing certain parameters. In the current implementation using mini batch stochastic gradient descent and backpropagation, an accuracy of 95% was obtained in one of the trial runs.

ACKNOWLEDGEMENT

Thanks to our project guide Ms K. Sugamya, CBIT.

REFERENCES

[1] Machine Learning: Hands-On for Developers and Technical Professionals
[2] A Brief Introduction to Neural Networks, David Kriesel
[3] http://www.deeplearning.net/tutorial/gettingstarted.html