What are computers capable of learning? This question, addressed by computational learning theory, is of particular importance. Learning problems form an important class of computational tasks, covering many of the computations researchers apply to large real-world data sets. Computational learning theory can be described as the study of the essential capabilities and limitations of algorithms that learn from data. There is also hope that understanding a computer's capacity for learning will shed some light on learning in human beings.

In Valiant's landmark paper "A Theory of the Learnable" [1], an explicit goal is stated: "The results of learnability theory would then indicate the maximum granularity of the single concepts that can be acquired without programming." Some combination of sample data, intelligent questioning, and background knowledge may make it possible to infer rules reliably, with scientific and social benefit. Many theories and algorithms are easy to understand once examples are given; however, without a precise definition of learnability it is difficult to delimit the scope of application of an algorithm or to compare different techniques and methods. Machine learning, which has become very popular in computer science, rests on exactly this kind of mathematical study.

This area seeks good predictions from the analysis of data, a task also known as inductive inference and statistical pattern recognition; there is a large body of research in this area [2, 3, 4, 5]. Over more than two and a half decades, a great deal of work has been done in computational learning theory, yielding positive learnability results, many hardness results, and many learning models. Computational learning theory is closely related to research in machine learning, which in turn plays a role in artificial intelligence research; however, the strength and relevance of this relationship varies from problem to problem.

Artificial intelligence has tackled many of these problems, but learning in the biological sense is complex and not clearly understood. The focus here is on what it means to learn a target concept efficiently from given examples. Hellerstein [6] describes the goal of designing computationally efficient algorithms that learn Boolean functions f : {0,1}^n → {-1,1}. The same paper [6] presents a general framework within which this question is often addressed, roughly as follows:

1. There is a fixed class C of possible target functions over {0,1}^n which is a priori known to the learning algorithm. (Such function classes are often referred to as concept classes, and the functions in such classes are referred to as concepts.) [6]

2. The learning algorithm is given some form of access to information about the unknown target concept c ∈ C [6].

3. At the end of its execution, the learning algorithm outputs a hypothesis h : {0,1}^n → {-1,1}, which ideally should be equivalent or close to c [6].

Several notions of efficient learnability have been developed with respect to various hardness measures for Boolean functions. A number of works [7, 8, 9] study the Fourier transform of Boolean functions.
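The three-step framework above can be illustrated with the classic elimination algorithm for learning monotone conjunctions from random examples. This is a standard textbook example, not taken from [6]; the function names and the particular target concept below are hypothetical, chosen only for illustration.

```python
import random

def learn_conjunction(examples):
    """Elimination algorithm: start with all n variables and delete
    any variable that is 0 in some positive example.  Returns the set
    of variable indices kept in the hypothesis conjunction."""
    n = len(examples[0][0])
    kept = set(range(n))
    for x, label in examples:
        if label == 1:  # positive example: every kept var must be 1
            kept -= {i for i in kept if x[i] == 0}
    return kept

def evaluate(kept, x):
    """Hypothesis h(x): the AND of the kept variables."""
    return 1 if all(x[i] == 1 for i in kept) else 0

# Hypothetical target concept c(x) = x0 AND x2 over {0,1}^4.
target = {0, 2}
random.seed(0)
sample = []
for _ in range(50):
    x = tuple(random.randint(0, 1) for _ in range(4))
    sample.append((x, 1 if all(x[i] for i in target) else 0))

h = learn_conjunction(sample)
print(sorted(h))  # with enough positive examples, h shrinks to the target
```

Because positive examples set every target variable to 1, the hypothesis always contains the target variables and is consistent with the sample; this is the sense in which item 3 above asks for an h "equivalent or close to c".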

L.G. Valiant [1] also introduced the distribution-free model, which is widely used in machine learning research. The learning model considered in this paper is the Probably Approximately Correct (PAC) learning model, likewise introduced by L.G. Valiant [1].
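For concreteness, the standard PAC criterion can be stated as follows (this is the common textbook formulation rather than a verbatim quotation from [1]): an algorithm A PAC-learns a concept class C if, for every target c ∈ C, every distribution D over {0,1}^n, and every ε, δ ∈ (0,1), given m = poly(1/ε, 1/δ, n, size(c)) labeled examples (x, c(x)) with x drawn from D, A runs in polynomial time and outputs a hypothesis h satisfying

    Pr[ err_D(h) ≤ ε ] ≥ 1 − δ,    where    err_D(h) = Pr_{x ∼ D}[ h(x) ≠ c(x) ].

The model is called distribution-free because the same algorithm must succeed under every distribution D.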

Later, M.J. Kearns and L.G. Valiant [10] jointly established limitations of this model. A long-standing problem in theoretical machine learning has been learning the class of DNF formulas with a polynomial number of terms.

The problem has remained a central open problem ever since L.G. Valiant introduced the PAC model. There have been several works [11, 12, 13, 14, 15] on learning DNF, yet no satisfactory result is known for learning DNF in the PAC model. Learning DNF with and without membership queries is known to be equivalent for certain specific distributions, but not for the uniform distribution [16]. The difficulty of learning AC^1 circuits under the uniform distribution is discussed in [17].

There has been work [18] on the relationship between learnability and the Fourier spectrum, with approximation taken with respect to the uniform distribution. A polynomial-time algorithm that learns read-once DNF together with decision trees under the uniform distribution is given in [19]. Another work [20] uses the Fourier representation of decision trees to obtain a polynomial-time learning algorithm under the uniform distribution. The paper [21] also studies DNF and its Fourier transform representation.
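As background for these Fourier-based results: over the uniform distribution, every Boolean function f : {0,1}^n → {-1,1} has the expansion f = Σ_S f̂(S) χ_S, where χ_S(x) = (-1)^{Σ_{i∈S} x_i} and f̂(S) = E_x[f(x) χ_S(x)]. The brute-force sketch below (an illustration with my own function names, not an algorithm from the cited papers) computes all 2^n coefficients.

```python
from itertools import product

def fourier_coefficients(f, n):
    """Compute f_hat(S) = E_x[f(x) * chi_S(x)] for every S, where
    chi_S(x) = (-1)^(sum of x_i over i in S).  Sets S are encoded as
    0/1 indicator tuples.  Brute force over all x and S: O(4^n)."""
    points = list(product([0, 1], repeat=n))
    coeffs = {}
    for S in product([0, 1], repeat=n):
        total = 0
        for x in points:
            chi = (-1) ** sum(x[i] for i in range(n) if S[i])
            total += f(x) * chi
        coeffs[S] = total / len(points)
    return coeffs

# Example: parity of x0 and x2 over {0,1}^3; this function equals
# chi_{0,2}, so all Fourier mass sits on the single set S = {0, 2}.
parity = lambda x: (-1) ** (x[0] ^ x[2])
c = fourier_coefficients(parity, 3)
print(c[(1, 0, 1)])  # 1.0
```

Parseval's identity, Σ_S f̂(S)^2 = 1 for {-1,1}-valued f, is what makes "learning by approximating the heavy Fourier coefficients" a sensible strategy in the uniform-distribution results cited above.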