30 December 2014

Machine Learning

Objectives:
• To be able to formulate machine learning problems corresponding to different applications.
• To understand a range of machine learning algorithms along with their strengths and weaknesses.
• To understand the basic theory underlying machine learning.
• To be able to apply machine learning algorithms to solve problems of moderate complexity.
• To be able to read current research papers and understand the issues raised by current research.

UNIT I
Introduction: Well-posed learning problems, Designing a learning system, Perspectives and issues in machine learning 
Concept learning and the general-to-specific ordering: Introduction, A concept learning task, Concept learning as search, Find-S: finding a maximally specific hypothesis, Version spaces and the candidate elimination algorithm, Remarks on version spaces and candidate elimination, Inductive bias.
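The Find-S algorithm listed above can be sketched in a few lines. This is a minimal illustration, not Mitchell's full treatment: the attribute names, the "?" wildcard encoding, and the EnjoySport-style toy data are illustrative assumptions.

```python
# Minimal sketch of Find-S, assuming attribute values encoded as strings
# and '?' as the "any value" wildcard. Data below is a hypothetical
# EnjoySport-style sample: (Sky, AirTemp, Humidity).

def find_s(examples):
    """Return the maximally specific hypothesis consistent with the
    positive examples. Each example is (attribute_tuple, label)."""
    hypothesis = None
    for attrs, label in examples:
        if label != "yes":            # Find-S ignores negative examples
            continue
        if hypothesis is None:        # start from the first positive example
            hypothesis = list(attrs)
        else:
            for i, value in enumerate(attrs):
                if hypothesis[i] != value:
                    hypothesis[i] = "?"   # generalize mismatched attributes
    return hypothesis

data = [
    (("sunny", "warm", "normal"), "yes"),
    (("sunny", "warm", "high"),   "yes"),
    (("rainy", "cold", "high"),   "no"),
]
print(find_s(data))   # ['sunny', 'warm', '?']
```

Note how the negative example is never consulted: Find-S only generalizes, which is exactly why version spaces and candidate elimination (also in this unit) are needed to track consistent hypotheses more completely.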

UNIT II
Decision Tree learning: Introduction, Decision tree representation, Appropriate problems for decision tree learning, The basic decision tree learning algorithm, Hypothesis space search in decision tree learning, Inductive bias in decision tree learning, Issues in decision tree learning
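The basic decision tree algorithm in this unit chooses splits by information gain. The following is a small sketch of the entropy and gain computations only (not a full ID3 implementation); the toy labels and attribute values are made up for illustration.

```python
import math
from collections import Counter

# Entropy and information gain as used by the basic (ID3-style)
# decision tree learning algorithm. Toy data is illustrative only.

def entropy(labels):
    """Entropy of a collection of class labels, in bits."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def information_gain(labels, attribute_values):
    """Expected entropy reduction from splitting on one attribute.
    attribute_values[i] is that attribute's value for example i."""
    total = len(labels)
    remainder = 0.0
    for v in set(attribute_values):
        subset = [l for l, av in zip(labels, attribute_values) if av == v]
        remainder += len(subset) / total * entropy(subset)
    return entropy(labels) - remainder

labels = ["yes", "yes", "no", "no"]
print(entropy(labels))                                   # 1.0
print(information_gain(labels, ["a", "a", "b", "b"]))    # 1.0 (perfect split)
```

A perfect split drives the remaining entropy to zero, so the gain equals the original entropy; ID3 greedily picks the attribute maximizing this quantity at each node.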
Artificial Neural Networks: Introduction, Neural network representation, Appropriate problems for neural network learning, Perceptrons, Multilayer networks and the backpropagation algorithm, Remarks on the backpropagation algorithm, An illustrative example: face recognition, Advanced topics in artificial neural networks
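The perceptron topic above can be illustrated with the perceptron training rule, w ← w + η(t − o)x. This sketch assumes a single threshold unit; the AND data, learning rate, and epoch count are illustrative choices.

```python
# Minimal perceptron training sketch: a threshold unit trained with the
# perceptron rule w += eta * (t - o) * x. Data (logical AND), eta, and
# epochs are illustrative choices.

def train_perceptron(samples, eta=0.1, epochs=20):
    """samples: list of (input_tuple, target) with targets in {0, 1}."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, t in samples:
            o = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            w = [wi + eta * (t - o) * xi for wi, xi in zip(w, x)]
            b += eta * (t - o)
    return w, b

and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
print([predict(x) for x, _ in and_data])   # [0, 0, 0, 1]
```

AND is linearly separable, so the perceptron convergence theorem guarantees this loop reaches a separating weight vector; for non-separable functions such as XOR it would not, which motivates the multilayer networks and backpropagation also covered in this unit.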
Evaluating Hypotheses: Motivation, Estimating hypothesis accuracy, Basics of sampling theory, A general approach for deriving confidence intervals, Difference in error of two hypotheses, Comparing learning algorithms
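The confidence-interval topic above rests on one formula: for a hypothesis with sample error e measured on n examples, an approximate two-sided interval is e ± z·sqrt(e(1 − e)/n). A small sketch, with a made-up error rate and sample size:

```python
import math

# Approximate confidence interval for sample error:
#   error +/- z * sqrt(error * (1 - error) / n)
# z = 1.96 gives roughly a 95% two-sided interval. The error rate and
# sample size below are hypothetical numbers for illustration.

def error_confidence_interval(sample_error, n, z=1.96):
    margin = z * math.sqrt(sample_error * (1.0 - sample_error) / n)
    return sample_error - margin, sample_error + margin

lo, hi = error_confidence_interval(0.30, 100)
print(round(lo, 2), round(hi, 2))   # roughly 0.21 0.39
```

The interval shrinks as 1/sqrt(n): quadrupling the test set roughly halves the margin, which is the key quantitative point when comparing learning algorithms on limited data.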

UNIT III
Bayesian learning: Introduction, Bayes theorem, Bayes theorem and concept learning, Maximum likelihood and least squared error hypotheses, Maximum likelihood hypotheses for predicting probabilities, Minimum description length principle, Bayes optimal classifier, Gibbs algorithm, Naïve Bayes classifier, An example: learning to classify text, Bayesian belief networks, The EM algorithm
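The naïve Bayes classifier listed above picks the label maximizing P(label) · ∏ P(attribute | label), assuming attribute independence given the label. A minimal sketch using raw frequency estimates (no smoothing); the PlayTennis-style toy data is illustrative.

```python
from collections import Counter, defaultdict

# Minimal naive Bayes for discrete attributes, using unsmoothed
# frequency estimates. Toy weather data is illustrative only.

def train_nb(examples):
    """examples: list of (attribute_tuple, label)."""
    labels = Counter(label for _, label in examples)
    cond = defaultdict(Counter)           # counts per (position, label)
    for attrs, label in examples:
        for i, v in enumerate(attrs):
            cond[(i, label)][v] += 1
    return labels, cond, sum(labels.values())

def classify_nb(model, attrs):
    labels, cond, total = model
    best, best_p = None, -1.0
    for label, count in labels.items():
        p = count / total                 # prior P(label)
        for i, v in enumerate(attrs):
            p *= cond[(i, label)][v] / count   # P(attr_i = v | label)
        if p > best_p:
            best, best_p = label, p
    return best

data = [
    (("sunny", "hot"), "no"), (("sunny", "mild"), "no"),
    (("rain", "mild"), "yes"), (("rain", "cool"), "yes"),
    (("overcast", "hot"), "yes"),
]
model = train_nb(data)
print(classify_nb(model, ("rain", "mild")))   # yes
```

With unsmoothed estimates an unseen attribute value zeroes out the whole product; the text-classification example in this unit is where m-estimate smoothing becomes essential.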
Computational learning theory: Introduction, Probably learning an approximately correct hypothesis, Sample complexity for finite hypothesis spaces, Sample complexity for infinite hypothesis spaces, The mistake bound model of learning
Instance-Based Learning: Introduction, k-Nearest Neighbour Learning, Locally Weighted Regression, Radial Basis Functions, Case-Based Reasoning, Remarks on Lazy and Eager Learning
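k-nearest-neighbour learning, the simplest instance-based method above, stores the training set and classifies a query by majority vote among its k closest neighbours. A sketch with Euclidean distance; the 2-D points and k = 3 are illustrative choices.

```python
import math
from collections import Counter

# Minimal k-NN with Euclidean distance and majority vote.
# The 2-D toy points and k=3 are illustrative.

def knn_classify(train, query, k=3):
    """train: list of (point_tuple, label)."""
    dists = sorted((math.dist(p, query), label) for p, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((6, 5), "b"), ((5, 6), "b")]
print(knn_classify(train, (0.5, 0.5)))   # a
print(knn_classify(train, (5.5, 5.5)))   # b
```

This is the "lazy" extreme discussed in the unit: all work is deferred to query time, in contrast with eager learners such as decision trees that commit to a global hypothesis during training.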
Genetic Algorithms: Motivation, Genetic Algorithms, An illustrative Example, Hypothesis Space Search, Genetic Programming, Models of Evolution and Learning, Parallelizing Genetic Algorithms
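The genetic-algorithm loop above (selection, crossover, mutation over a population of bit-string hypotheses) can be sketched on the standard OneMax toy problem, where fitness is simply the number of 1 bits. All parameters below (population size, rates, tournament selection) are illustrative choices, not prescriptions.

```python
import random

# Minimal genetic algorithm: bit-string hypotheses, fitness = number of
# ones (the "OneMax" toy problem), size-2 tournament selection,
# single-point crossover, and bit-flip mutation. Parameters are
# illustrative choices.

def genetic_algorithm(length=20, pop_size=30, generations=60,
                      mutation_rate=0.01, seed=0):
    rng = random.Random(seed)
    fitness = lambda bits: sum(bits)
    pop = [[rng.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def select():                        # tournament of size 2
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        next_pop = []
        while len(next_pop) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, length)   # single-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ 1 if rng.random() < mutation_rate else bit
                     for bit in child]       # bit-flip mutation
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

best = genetic_algorithm()
print(sum(best))   # close to the maximum of 20 on this toy problem
```

The hypothesis-space search here is population-based and stochastic rather than the single-hypothesis gradient or greedy searches seen in earlier units, which is the contrast this unit draws.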

UNIT IV
Learning Sets of Rules: Introduction, Sequential Covering Algorithms, Learning Rule Sets: Summary, Learning First Order Rules, Learning Sets of First Order Rules: FOIL, Induction as Inverted Deduction, Inverting Resolution
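The sequential covering strategy above can be shown with a deliberately tiny rule language: learn one rule that covers only positive examples, remove what it covers, and repeat. Real learners such as FOIL search a much richer space of first-order rules; this single-attribute-test version and its toy data are illustrative assumptions.

```python
# Minimal sequential covering: each "rule" is one (attribute_index,
# value) test. Real rule learners (e.g. FOIL) use far richer rule
# languages; this toy version and data are illustrative only.

def learn_one_rule(examples):
    """Find an (attribute_index, value) test that covers at least one
    example and only positives, or None if no such test exists."""
    for i in range(len(examples[0][0])):
        for value in {attrs[i] for attrs, _ in examples}:
            covered = [lab for attrs, lab in examples if attrs[i] == value]
            if covered and all(lab == "yes" for lab in covered):
                return (i, value)
    return None

def sequential_covering(examples):
    rules, remaining = [], list(examples)
    while any(lab == "yes" for _, lab in remaining):
        rule = learn_one_rule(remaining)
        if rule is None:
            break
        rules.append(rule)
        i, v = rule
        remaining = [(a, l) for a, l in remaining if a[i] != v]
    return rules

data = [(("red", "small"), "yes"), (("red", "large"), "yes"),
        (("blue", "small"), "no"), (("green", "large"), "no")]
print(sequential_covering(data))   # [(0, 'red')]
```

Because each rule is learned greedily against the remaining examples, the resulting rule set is a disjunction learned one clause at a time, the core idea the unit then lifts to first-order rules.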
Analytical Learning: Introduction, Learning with Perfect Domain Theories: Prolog-EBG, Remarks on Explanation-Based Learning, Explanation-Based Learning of Search Control Knowledge

UNIT V
Combining Inductive and Analytical Learning: Motivation, Inductive-Analytical Approaches to Learning, Using Prior Knowledge to Initialize the Hypothesis, Using Prior Knowledge to Alter the Search Objective, Using Prior Knowledge to Augment Search Operators
Reinforcement Learning: Introduction, The Learning Task, Q Learning, Non-Deterministic Rewards and Actions, Temporal Difference Learning, Generalizing from Examples, Relationship to Dynamic Programming
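Q learning, the central algorithm above, maintains a table Q(s, a) updated toward r + γ·max_a′ Q(s′, a′). A sketch on a made-up 1-D corridor task (states 0..4, reward 1 for reaching the right end); the environment, learning rate, and episode count are illustrative assumptions.

```python
import random

# Minimal tabular Q learning on a hypothetical 1-D corridor: states
# 0..4, actions left (-1) / right (+1), reward 1 on reaching state 4.
# Environment and parameters are illustrative choices.

def q_learning(n_states=5, episodes=200, alpha=0.5, gamma=0.9, seed=0):
    rng = random.Random(seed)
    actions = (-1, +1)
    Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            a = rng.choice(actions)          # purely random exploration
            s2 = min(max(s + a, 0), n_states - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning update toward r + gamma * max_a' Q(s', a')
            Q[(s, a)] += alpha * (
                r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)]
            )
            s = s2
    return Q

Q = q_learning()
policy = [max((-1, +1), key=lambda a: Q[(s, a)]) for s in range(4)]
print(policy)   # the greedy policy should move right in every state
```

Note that the behaviour policy here is random while the learned Q converges toward the optimal values, illustrating the off-policy character of Q learning and its relationship to the dynamic-programming Bellman backup covered at the end of the unit.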

TEXT BOOKS:
1. Tom M. Mitchell, Machine Learning, McGraw Hill
2. Stephen Marsland, Machine Learning: An Algorithmic Perspective, Taylor & Francis (CRC)

REFERENCE BOOKS:
1. William W. Hsieh, Machine Learning Methods in the Environmental Sciences: Neural Networks, Cambridge University Press.
2. Richard O. Duda, Peter E. Hart and David G. Stork, Pattern Classification, John Wiley & Sons Inc., 2001.
3. Christopher M. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, 1995.

