This Machine Learning course provides a practical and theoretical foundation in supervised, unsupervised, and evolutionary learning methods, equipping students with the skills to tackle real-world machine learning applications. From Decision Trees, Neural Networks, and Support Vector Machines to Genetic Algorithms, learners will explore models and algorithms widely used in data science and AI.

Course Objectives

  • To be able to identify machine learning problems corresponding to different applications.
  • To understand various machine learning algorithms along with their strengths and weaknesses.
  • To understand the basic theory underlying machine learning.
  • To introduce Decision Tree learning and Instance-Based Learning techniques.

Course Syllabus

UNIT 1

Introduction:
Well-posed learning problems, perspectives and issues in machine learning, designing a learning system, types of learning.

Concept Learning:
Concept learning task, concept learning as search through a hypothesis space, finding maximally specific hypotheses, version spaces and the candidate elimination algorithm, inductive bias.
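
To make the search for a maximally specific hypothesis concrete, the Python sketch below implements the Find-S idea; the attribute values and training examples are invented purely for illustration.

    # Find-S: generalize the most specific hypothesis just enough to cover
    # every positive training example (toy illustration with invented data).

    def find_s(examples):
        hypothesis = None
        for attributes, label in examples:
            if label != "yes":              # Find-S ignores negative examples
                continue
            if hypothesis is None:
                hypothesis = list(attributes)   # start from the first positive example
            else:
                for i, value in enumerate(attributes):
                    if hypothesis[i] != value:
                        hypothesis[i] = "?"     # generalize mismatched attributes
        return hypothesis

    training_data = [
        (("sunny", "warm", "normal", "strong"), "yes"),
        (("sunny", "warm", "high",   "strong"), "yes"),
        (("rainy", "cold", "high",   "strong"), "no"),
        (("sunny", "warm", "high",   "strong"), "yes"),
    ]
    print(find_s(training_data))   # ['sunny', 'warm', '?', 'strong']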

UNIT 2

Decision Tree Learning:
Decision tree representation and learning algorithm, appropriate problems for decision tree learning, hypothesis space search in decision tree learning, inductive bias in decision tree learning: Occam’s razor, issues in decision tree learning.
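
The attribute-selection heuristic behind ID3-style tree learning, information gain, can be shown with a short computation; the Python sketch below uses a toy label distribution and split chosen only for illustration.

    # Entropy and information gain: the quantities an ID3-style learner uses
    # to pick the attribute to split on (toy counts, invented for illustration).
    from collections import Counter
    from math import log2

    def entropy(labels):
        counts = Counter(labels)
        total = len(labels)
        return -sum((c / total) * log2(c / total) for c in counts.values())

    def information_gain(labels, groups):
        # groups: the label lists produced by splitting on one attribute's values
        total = len(labels)
        remainder = sum(len(g) / total * entropy(g) for g in groups)
        return entropy(labels) - remainder

    labels = ["yes"] * 9 + ["no"] * 5
    split = [["yes"] * 2 + ["no"] * 3,    # value 1 of the attribute
             ["yes"] * 4,                  # value 2
             ["yes"] * 3 + ["no"] * 2]     # value 3
    print(round(information_gain(labels, split), 3))   # about 0.247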

Artificial Neural Networks:
Introduction, the neuron model, activation functions, neural network architecture: single-layer feed-forward networks, multi-layer feed-forward networks.
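
A minimal Python sketch of the forward pass through a small multi-layer feed-forward network with sigmoid activations; the weights are hand-picked (not trained) and purely illustrative.

    # Forward pass of a tiny multi-layer feed-forward network.
    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def layer(inputs, weights, biases):
        # each row of `weights` holds the incoming weights of one neuron
        return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
                for row, b in zip(weights, biases)]

    x = [0.5, -1.0]
    hidden = layer(x, weights=[[0.2, -0.4], [0.7, 0.1]], biases=[0.0, -0.3])
    output = layer(hidden, weights=[[1.5, -2.0]], biases=[0.1])
    print(output)   # a single activation in (0, 1)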

UNIT 3

Support Vector Machines:
Introduction, linear classifier, non-linear classifier, training SVM, support vector regression.
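
A brief usage sketch of training a linear and a non-linear (RBF-kernel) SVM classifier, assuming scikit-learn is installed; the synthetic dataset and settings are illustrative only.

    # Linear vs. RBF-kernel SVM on synthetic data (assumes scikit-learn).
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=200, n_features=4, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    for kernel in ("linear", "rbf"):
        clf = SVC(kernel=kernel).fit(X_train, y_train)
        print(kernel, clf.score(X_test, y_test))   # test-set accuracy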

Bayesian Learning:
Bayes theorem and concept learning, minimum description length principle, Bayes optimal classifier, Gibbs algorithm, naive Bayes classifier, the EM algorithm.
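
A minimal naive Bayes classifier over categorical attributes with Laplace smoothing, sketched in Python on a toy weather-style dataset invented for illustration.

    # Naive Bayes with Laplace smoothing on categorical attributes.
    from collections import Counter, defaultdict
    from math import log

    def train_nb(examples):
        class_counts = Counter(label for _, label in examples)
        value_counts = defaultdict(Counter)   # (attribute index, label) -> value counts
        attr_values = defaultdict(set)        # attribute index -> distinct values seen
        for attributes, label in examples:
            for i, value in enumerate(attributes):
                value_counts[(i, label)][value] += 1
                attr_values[i].add(value)
        return class_counts, value_counts, attr_values

    def predict_nb(class_counts, value_counts, attr_values, attributes):
        total = sum(class_counts.values())
        best_label, best_score = None, float("-inf")
        for label, count in class_counts.items():
            score = log(count / total)                        # log prior
            for i, value in enumerate(attributes):
                seen = value_counts[(i, label)][value]
                # Laplace smoothing: add one to every value count
                score += log((seen + 1) / (count + len(attr_values[i])))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

    data = [(("sunny", "hot"), "no"), (("sunny", "mild"), "no"),
            (("rainy", "mild"), "yes"), (("overcast", "hot"), "yes"),
            (("rainy", "cool"), "yes")]
    model = train_nb(data)
    print(predict_nb(*model, ("rainy", "mild")))   # "yes"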

UNIT 4

Computational Learning Theory:
Probably approximately correct (PAC) learning, sample complexity for finite and infinite hypothesis spaces, the mistake bound model.
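
For a consistent learner over a finite hypothesis space, the standard sample-complexity bound m >= (1/epsilon) * (ln|H| + ln(1/delta)) can be evaluated directly; the Python sketch below uses illustrative numbers.

    # PAC sample-complexity bound for a finite hypothesis space.
    from math import ceil, log

    def pac_sample_bound(hypothesis_space_size, epsilon, delta):
        return ceil((log(hypothesis_space_size) + log(1.0 / delta)) / epsilon)

    # |H| = 2**10 hypotheses, error at most 0.1 with probability at least 0.95
    print(pac_sample_bound(2 ** 10, epsilon=0.1, delta=0.05))   # 100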

Instance-Based Techniques:
k-nearest neighbor learning, locally weighted regression, radial basis functions, case-based reasoning, remarks on lazy versus eager learning.
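
A minimal k-nearest neighbor classifier (majority vote over the k closest points by Euclidean distance), with toy 2-D data invented for illustration.

    # k-nearest neighbor classification by majority vote.
    from collections import Counter
    from math import dist

    def knn_predict(training_data, query, k=3):
        neighbors = sorted(training_data, key=lambda item: dist(item[0], query))[:k]
        votes = Counter(label for _, label in neighbors)
        return votes.most_common(1)[0][0]

    points = [((1.0, 1.0), "a"), ((1.2, 0.8), "a"), ((0.9, 1.1), "a"),
              ((5.0, 5.0), "b"), ((5.2, 4.9), "b"), ((4.8, 5.1), "b")]
    print(knn_predict(points, (1.1, 0.9)))   # "a"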

UNIT 5

Genetic Algorithm:
Biological motivation, representing hypotheses, genetic operators, fitness function and selection, hypothesis space search, genetic programming, models of evolution and learning, parallelizing genetic algorithms.
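
A minimal genetic algorithm sketch on the OneMax problem (evolve a bit string toward all ones); the population size, tournament selection, single-point crossover, and mutation rate are illustrative choices, not prescribed by the course.

    # Tiny genetic algorithm: tournament selection, single-point crossover,
    # bit-flip mutation (all parameters invented for illustration).
    import random

    LENGTH, POP_SIZE, GENERATIONS = 20, 30, 40
    random.seed(0)

    def fitness(bits):
        return sum(bits)                       # number of 1s; maximum is LENGTH

    def tournament(population):
        return max(random.sample(population, 3), key=fitness)

    def crossover(a, b):
        point = random.randrange(1, LENGTH)    # single-point crossover
        return a[:point] + b[point:]

    def mutate(bits, rate=0.05):
        return [1 - b if random.random() < rate else b for b in bits]

    population = [[random.randint(0, 1) for _ in range(LENGTH)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population = [mutate(crossover(tournament(population), tournament(population)))
                      for _ in range(POP_SIZE)]
    print(max(fitness(ind) for ind in population))   # typically close to LENGTH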

Text Books

Tom M. Mitchell, Machine Learning, McGraw Hill Education, 2013.

Reference Books

  • Saroj Kaushik, Artificial Intelligence, CENGAGE Learning, 2011.
  • Trevor Hastie, Robert Tibshirani, and Jerome Friedman, The Elements of Statistical Learning, 2nd Edition, Springer Series in Statistics, 2009.
  • William W. Hsieh, Machine Learning Methods in the Environmental Sciences: Neural Networks and Kernels, Cambridge University Press.
  • Stephen Marsland, Machine Learning – An Algorithmic Perspective, CRC Press, 2009.

Online Resources

  • http://www.cs.cmu.edu/~tom/
  • http://www.holehouse.org/mlclass/

Course Outcomes

After completion of the course, students will be able to:

  • Gain knowledge of the basic theory underlying machine learning.
  • Understand machine learning problems corresponding to different applications.
  • Create solutions with Decision Trees and Bayesian Classifiers for various business problems.
  • Implement instance-based learning and analytic learning for suitable applications.
  • Apply Genetic Algorithms and Reinforcement Learning to real-world applications.
  • Design applications using real datasets and evaluate the performance of different algorithms.