Industry-recognized certification enables you to add this credential to your resume upon completion of all courses.

Dr. Rukmini Vijaykumar, Instructor - Supervised Learning: Classification

Dr. Rukmini Vijaykumar

Dr. Vijaykumar is an independent consultant in data science and machine learning. She holds a Ph.D. in Computer Science with a specialization in AI from the University of Massachusetts, Amherst, and has over 30 years of professional experience in computing spanning academia, applied research, and industry.

Instructor: Dr. Rukmini Vijaykumar

Classification Methods, Algorithms and Tools

  • Learn several classification models that are widely in use.
  • Gain the knowledge and skills to effectively apply existing classification algorithms and tools to solve real-world problems.
  • Instructor has a Ph.D. in Computer Science with a specialization in AI from the University of Massachusetts, Amherst, and over 30 years of professional experience.

Course Description

In this course, you will be introduced to the classification problem and a number of the approaches used to solve it. Each approach is presented with its underlying intuition as well as the necessary mathematical underpinnings. We discuss the learning algorithms and illustrate the available Python tools using examples. You will learn the relative merits and drawbacks of each approach. The focus of the course is on learning to find the right model for the problem at hand using the available tools and experimentation. Throughout the course, exercises are provided to reinforce the ideas.

What am I going to get from this course?

  • Learn several classification models that are widely in use.
  • Gain the knowledge and skills to effectively apply existing classification algorithms and tools to solve real-world problems.
  • Evaluate multiple models and select the most appropriate for the task at hand.

Prerequisites and Target Audience

What will students need to know or do before starting this course?

Students will benefit from prior exposure to probability and statistics, basic algebra, and calculus. Familiarity with the Python programming language is required. Students should be able to use Python 3.x and Python notebooks.

Who should take this course? Who should not?

Industry professionals and college students who are interested in learning about the available algorithms and tools to address machine learning problems in general, and specifically, the classification problem. 


Module 1:

Lecture 1 Introduction & Overview

In this lecture, we will look at what machine learning is and see some examples of its application in our everyday lives.

Lecture 2 What is Supervised Learning

In this lecture, we will define what the classification problem is within Machine Learning, see some examples and briefly outline the models we will be looking at.

Lecture 3 Measuring Classifier Performance

In this lecture, we review the metrics that are commonly used for measuring model performance, and provide pointers to additional resources to complement the course material.
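As a preview of the kind of metrics this lecture covers, here is a minimal sketch using scikit-learn (the course does not name a specific library, so scikit-learn is an assumption) on a small hypothetical set of binary predictions:

```python
from sklearn.metrics import (
    accuracy_score, precision_score, recall_score, f1_score, confusion_matrix,
)

# Hypothetical true labels and classifier predictions for a binary problem.
y_true = [0, 1, 1, 0, 1, 1, 0, 0]
y_pred = [0, 1, 0, 0, 1, 1, 1, 0]

print("accuracy: ", accuracy_score(y_true, y_pred))   # fraction predicted correctly
print("precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
print("recall:   ", recall_score(y_true, y_pred))     # TP / (TP + FN)
print("f1:       ", f1_score(y_true, y_pred))         # harmonic mean of the two above
print(confusion_matrix(y_true, y_pred))               # rows: true class, cols: predicted
```

The confusion matrix is often the most informative single summary, since every other metric above can be read off from its four cells.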

Module 2:

Lecture 4 Data Sets & Visualization

In this lecture, we will get familiar with the programming language and environment we will be using, as well as some of the data sets we will use to illustrate the classification models we will be exploring in this course.
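For instance, the Iris data set used throughout the course ships with scikit-learn (an assumed part of the toolset), so a first look at the data takes only a few lines:

```python
from sklearn.datasets import load_iris

# Load the Iris data set: 150 flowers, 4 measurements each, 3 species.
iris = load_iris()
X, y = iris.data, iris.target

print(X.shape)             # number of samples and features
print(iris.feature_names)  # sepal/petal lengths and widths (cm)
print(iris.target_names)   # the three class labels (species)
```

From here, a scatter plot of any two feature columns, colored by `y`, is a quick way to see how well the classes separate visually.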

Quiz 1 Seeds Data Exploration

Observe the relationships between the features in the Seeds data and identify two features that you consider most likely to provide good separation between the classes.

Module 3:

Lecture 5 K Nearest Neighbors

In this lecture, we will explore the K Nearest Neighbor classifier. We look at the underlying intuition, and the parameters needed to obtain the model. We present the classification algorithm and illustrate this approach using the Iris data set with different parameters and observe the results. We will then review the pros and cons of using this approach.
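A minimal sketch of this experiment, assuming scikit-learn as the tooling, fits K Nearest Neighbors on Iris for a few values of the parameter k and reports training accuracy:

```python
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Vary the main parameter, k (number of neighbors), and observe the results.
for k in (1, 5, 15):
    clf = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    print(f"k={k:2d}  training accuracy: {clf.score(X, y):.3f}")
```

Note that with k=1 the training accuracy is always 1.0, since each training point is its own nearest neighbor — an early hint that training accuracy alone is a misleading measure.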

Quiz 2 KNN - Creating Test Data and Measuring Test Accuracy

Create test data by using a portion of the Iris data set for testing. Use the remaining data for training. Measure test accuracy for the models created in class and compare them to the training accuracy.
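One way to set up this exercise (a sketch assuming scikit-learn; the exact split proportion is up to you) is with `train_test_split`:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Hold out 30% of the data for testing; stratify keeps class proportions equal.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("training accuracy:", clf.score(X_tr, y_tr))
print("test accuracy:    ", clf.score(X_te, y_te))
```

Comparing the two numbers for several values of k is exactly the kind of experiment the quiz asks for.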

Module 4:

Lecture 6 Decision Trees for Classification

In this lecture we explore another classification model – the Decision Tree. Decision trees are used for regression as well as classification. You will learn the basic characteristics of decision trees and how they may be applied to classification problems.
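A small sketch of a decision tree classifier, again assuming scikit-learn as the tooling; `export_text` prints the learned split rules, which makes the tree's if/else character concrete:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# Limiting max_depth keeps the tree small and readable (and curbs overfitting).
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", tree.score(X_te, y_te))
print(export_text(tree))  # the learned threshold rules, one split per line
```

Reading the printed rules is a good way to see why decision trees are prized for interpretability.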

Quiz 3

Module 5:

Lecture 7 Parametric Models: Bayes' Theorem & Naive Bayes Classifier

We now move to exploring a number of parametric models for classification. First we will review Bayes’ theorem and how it is applied in arriving at the conditional probabilities we are attempting to estimate for our classifiers. We will then look at the Naive Bayes classifier, reviewing the underlying assumptions, the model, its application to the Iris dataset.
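As a sketch of what this looks like in practice (scikit-learn assumed), `GaussianNB` implements Naive Bayes with per-class Gaussian feature distributions, and its `predict_proba` output is the posterior obtained via Bayes' theorem:

```python
from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)

# Naive Bayes assumes features are conditionally independent given the class;
# GaussianNB further models each feature with a per-class Gaussian.
nb = GaussianNB().fit(X, y)
print("training accuracy:", nb.score(X, y))
print(nb.predict_proba(X[:1]))  # posterior P(class | features) for one sample
```

The three posterior probabilities sum to one, as Bayes' theorem requires.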

Lecture 8 Parametric Models: Linear & Quadratic Discriminant Analysis

In this lecture, we will look at two parametric models - Linear Discriminant Analysis and Quadratic Discriminant Analysis. We will look at how these models differ from Naive Bayes by reviewing their underlying assumptions. We will then apply these models to a sample dataset and review their pros and cons.
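Both models are available in scikit-learn (assumed tooling), which makes the comparison in this lecture easy to reproduce:

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis,
    QuadraticDiscriminantAnalysis,
)

X, y = load_iris(return_X_y=True)

# LDA assumes one shared covariance matrix across classes (linear boundaries);
# QDA fits a covariance per class (quadratic boundaries) at the cost of more
# parameters to estimate.
lda = LinearDiscriminantAnalysis().fit(X, y)
qda = QuadraticDiscriminantAnalysis().fit(X, y)
print("LDA training accuracy:", lda.score(X, y))
print("QDA training accuracy:", qda.score(X, y))
```

Unlike Naive Bayes, neither model assumes feature independence; the difference between the two lies entirely in the covariance assumption noted in the comments.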

Module 6:

Lecture 9 Logistic Regression

In this section we will explore Logistic Regression, another probabilistic classification model that is widely applied to binary classification problems. We will learn about the Logistic Function and the role it plays in the Logistic Regression Model. We will examine what is involved in learning the Logistic Regression Model from training data and apply it to a sample dataset.
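A brief sketch of both ingredients — the logistic function itself, and a fitted model on a binary subset of Iris (scikit-learn assumed; restricting to two classes is an illustrative choice, since the lecture focuses on binary classification):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

def logistic(z):
    """The logistic (sigmoid) function, mapping any real z into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

print(logistic(0.0))  # 0.5 — the decision boundary of the model

# Binary problem: keep only the first two of the three Iris classes.
X, y = load_iris(return_X_y=True)
mask = y < 2
clf = LogisticRegression(max_iter=1000).fit(X[mask], y[mask])
print("training accuracy:", clf.score(X[mask], y[mask]))
```

The model learns weights w and bias b such that the predicted probability of class 1 is `logistic(w·x + b)`; classifying at probability 0.5 corresponds to the linear boundary `w·x + b = 0`.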

Module 7:

Lecture 10 Support Vector Machines

We will look at three closely related classifier models – the maximum margin classifier, the support vector classifier, and support vector machines. We will examine the underlying concepts and how these classifiers are learned from training data. We discuss basis expansion and kernel functions, and how they are used to extend the model to handle non-linear decision boundaries.
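A minimal sketch of the kernel idea, using scikit-learn's `SVC` (an assumed choice of tooling): swapping the kernel changes the shape of decision boundary the machine can learn, with no change to the rest of the code:

```python
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# A linear support vector classifier vs. a kernelized SVM: the RBF kernel
# lets the model fit non-linear decision boundaries without performing an
# explicit basis expansion.
for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel, C=1.0).fit(X, y)
    print(f"{kernel:6s} kernel, training accuracy: {clf.score(X, y):.3f}")
```

The parameter C controls the margin trade-off discussed in the lecture: small C tolerates more margin violations, large C tries harder to classify every training point correctly.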

Module 8:

Lecture 11 Classification Summary

In this final lecture, we will look at what it means to arrive at the best model for the problem at hand. Specifically, we will review the concepts of model capacity, generalization, capacity control, overfitting, underfitting, bias, variance, and the bias-variance tradeoff. We then examine hyperparameters and regularization, and their role in capacity control.
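As a closing sketch of capacity control in practice (scikit-learn assumed), cross-validated grid search tunes a regularization hyperparameter by estimating generalization rather than training fit:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# C is the inverse regularization strength: small C = strong regularization
# (lower capacity, risk of underfitting), large C = weak regularization
# (higher capacity, risk of overfitting).
grid = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    cv=5,  # 5-fold cross-validation scores each C on held-out folds
)
grid.fit(X, y)
print("best C:", grid.best_params_["C"])
print("cross-validated accuracy:", grid.best_score_)
```

Because each candidate C is scored on data the model did not train on, this procedure directly targets the bias–variance trade-off that the lecture describes.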