
Course Curriculum

Section 01: Introduction
What this course is about 00:02:28
We - the course instructors - start with introductions. We are a team that has studied at Stanford, IIT Madras and IIM Ahmedabad, and spent several years working in top tech companies, including Google and Flipkart.

Next, we talk about the target audience for this course: analytics professionals, modelers and big data professionals certainly, but also engineers, product managers, tech executives and investors, or anyone who has some curiosity about machine learning.

If machine learning is a car, this class will teach you how to drive. By the end of this class, students will be able to spot situations where machine learning can be used and deploy the appropriate solutions. Product managers and executives will learn enough of the 'how' to converse intelligently with their data science counterparts.

This course is practical as well: there are hundreds of lines of commented source code that can be used directly to implement natural language processing and machine learning for text summarization and text classification in Python.
Downloadable Files | Codes and Supplements 00:00:00
Section 02: Jump right in : Machine learning for Spam detection
Why should you jump on the bandwagon? FREE 00:16:31
Machine learning is quite the buzzword these days. While it's been around for a long time, today its applications are wide and far-reaching - from computer science to social science, quant trading and even genetics. From the outside, it seems like a very abstract science that is heavy on the math and tough to visualize. But it is not rocket science. Machine learning is like any other science: if you approach it from first principles and visualize what is happening, you will find that it is not that hard. So let's get right into it - we'll take an example and see what machine learning is and why it is so useful.
Plunging In – Machine Learning Approaches to Spam Detection 00:17:01
Machine learning usually involves a lot of terms that sound really obscure. We'll see a real-life implementation of a machine learning algorithm (Naive Bayes), and by the end of it you should be able to speak some of the language of ML with confidence.
Spam Detection with Machine Learning Continued 00:17:04
We have gotten our feet wet and seen the implementation of one ML solution to spam detection - let's venture a little further and see some other ways to solve the same problem. We'll see how K-Nearest Neighbors and Support Vector Machines can be used to solve spam detection.
Get the Lay of the Land : Types of Machine Learning Problems 00:17:26
So far we have been slowly getting comfortable with machine learning - we took one example and saw a few different approaches. That was just the tip of the iceberg - this class is an aerial maneuver: we will scout ahead and see the different classes of problems that machine learning can solve, all of which we will cover in this class.
Section 03: Naive Bayes Classifier
Random Variables 00:20:10
Many popular machine learning techniques are probabilistic in nature, and some working knowledge of probability helps. We'll cover random variables, probability distributions and the normal distribution.
Bayes Theorem 00:18:36
We have been learning some fundamentals that will help us with probabilistic concepts in Machine Learning. In this class, we will learn about conditional probability and Bayes' theorem, which is the foundation of many ML techniques.
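To make the theorem concrete, here is a minimal sketch in Python (not from the course materials - the probabilities are invented for illustration) of how Bayes' theorem turns a word's frequency in spam into a spam probability:

```python
# Hypothetical numbers, purely to illustrate Bayes' theorem:
# P(spam | word) = P(word | spam) * P(spam) / P(word)
p_spam = 0.4                # prior: fraction of emails that are spam
p_word_given_spam = 0.30    # 'free' appears in 30% of spam emails
p_word_given_ham = 0.02     # 'free' appears in 2% of legitimate emails

# Total probability of seeing the word in any email
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

# Posterior: probability an email is spam, given it contains the word
p_spam_given_word = p_word_given_spam * p_spam / p_word
print(round(p_spam_given_word, 3))  # ~0.909
```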
Naive Bayes Classifier FREE 00:08:49
The Naive Bayes classifier is a probabilistic classifier. We have built the foundation to understand what goes on under the hood - let's see how the Naive Bayes classifier uses Bayes' theorem.
Naive Bayes Classifier : An example 00:14:03
We will see how the Naive Bayes classifier can be used with an example.
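For a flavor of what such an example looks like in code, here is a toy sketch of our own (assuming scikit-learn; the messages and labels are invented):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Invented toy data - four messages, two per class
messages = ["win a free prize now", "free money offer",
            "meeting at noon tomorrow", "lunch with the team"]
labels = ["spam", "spam", "ham", "ham"]

vectorizer = CountVectorizer()           # bag-of-words counts
X = vectorizer.fit_transform(messages)

clf = MultinomialNB().fit(X, labels)     # learns per-class word probabilities
print(clf.predict(vectorizer.transform(["free prize meeting"])))
```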
Section 04: K-Nearest Neighbors
K-Nearest Neighbours 00:13:09
Let's understand the k-Nearest Neighbors setup with a visual representation of how the algorithm works.
K-Nearest Neighbours : A few wrinkles 00:14:47
There are a few wrinkles in k-Nearest Neighbors - things to keep in mind if and when you decide to implement it.
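One such wrinkle: k-NN is distance-based, so features on different scales can drown each other out, and k itself needs tuning. A minimal sketch (ours, assuming scikit-learn) that handles the scaling:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scaling matters because k-NN is distance based; k=5 is an arbitrary start
knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
knn.fit(X_train, y_train)
print(knn.score(X_test, y_test))
```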
Section 05: Support Vector Machines
Support Vector Machines Introduced FREE 00:08:16
We have been talking about different classifier algorithms. We'll learn about Support Vector Machines which are linear classifiers.
Support Vector Machines : Maximum Margin Hyperplane and Kernel Trick 00:16:23
The Support Vector Machines algorithm can be framed as an optimization problem. The kernel trick can be used along with SVMs to perform non-linear classification.
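A short sketch of the kernel trick at work (ours, assuming scikit-learn): on concentric circles, a linear SVM struggles while an RBF-kernel SVM separates the classes almost perfectly:

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Concentric circles: not linearly separable in the original feature space
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

linear_svm = SVC(kernel="linear").fit(X, y)
rbf_svm = SVC(kernel="rbf").fit(X, y)      # kernel trick: implicit non-linear map

print("linear:", linear_svm.score(X, y))   # close to chance
print("rbf:   ", rbf_svm.score(X, y))      # close to perfect
```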
Section 06: Clustering as a form of Unsupervised learning
Clustering : Introduction 00:19:07
Clustering helps us understand the patterns in a large set of data that we don't know much about. It is a form of unsupervised learning.
Clustering : K-Means and DBSCAN 00:13:42
K-Means and DBSCAN are two very popular clustering algorithms. How do they work, and what are the key considerations?
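A quick illustration of how the two are typically invoked (our sketch, assuming scikit-learn and synthetic data) - K-Means wants the number of clusters up front, while DBSCAN wants density parameters instead:

```python
from sklearn.cluster import KMeans, DBSCAN
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

# K-Means needs the number of clusters up front
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)

# DBSCAN infers clusters from density; eps and min_samples need tuning
dbscan = DBSCAN(eps=1.0, min_samples=5).fit(X)

print("k-means labels:", sorted(set(kmeans.labels_)))
print("DBSCAN labels: ", sorted(set(dbscan.labels_)))  # -1 marks noise points
```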
Section 07: Association Detection
Association Rules Learning 00:09:12
It is all about finding relationships in the data - sometimes relationships that you would not intuitively expect to find. It is pretty powerful, so let's take a peek at what it does.
Section 08: Dimensionality Reduction
Dimensionality Reduction 00:10:22
Data that you are working with can be noisy, garbled or difficult to make sense of. It can be so complicated that it's difficult to process efficiently. Dimensionality reduction to the rescue - it cleans up the noise and shows you a clear picture. Getting rid of unnecessary features also makes the computation simpler.
Principal Component Analysis 00:18:53
PCA is one of the most famous Dimensionality Reduction techniques. When you have data with a lot of variables and confusing interactions, PCA clears the air and finds the underlying causes.
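A minimal illustration (ours, assuming scikit-learn): project the four iris measurements onto two principal components and check how much variance they capture:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)

# Project the 4 original measurements onto 2 principal components
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                  # (150, 2)
print(pca.explained_variance_ratio_)    # variance captured per component
```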
Section 09: Artificial Neural Networks
Artificial Neural Networks: Perceptrons Introduced 00:11:18
Artificial Neural Networks are much misunderstood because of their name. We will see the Perceptron (a prototypical example of ANNs) and how it is analogous to a Support Vector Machine.
Section 10: Regression as a form of supervised learning
Regression Introduced : Linear and Logistic Regression 00:13:54
Regression can be used to predict the value of a variable, given some predictor variables. We'll see an example to understand its use and cover two popular methods: Linear and Logistic Regression.
Bias Variance Trade-off 00:10:13
In this class, we will talk about some trade-offs which we have to be aware of when we choose our training data and model.
Section 11: Natural Language Processing and Python
Installing Python – Anaconda and Pip 00:09:00
Anaconda is a Python distribution that comes with IPython. The best part about it is the ease with which one can install packages from within IPython - one line is virtually always enough. Just say '!pip'.
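For example, in an IPython or Jupyter cell (the package name here is just an illustration):

```python
# '!' hands the rest of the line to the shell, so installing a
# package (NLTK here, chosen just as an example) really is one line:
!pip install nltk
```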
Natural Language Processing with NLTK FREE 00:07:26
Natural Language Processing is a serious application for all the Machine Learning techniques we have been using. Let's get our feet wet by understanding a few of the common NLP problems and tasks. We'll get familiar with NLTK - an awesome Python toolkit for NLP.
Natural Language Processing with NLTK – See it in action 00:14:14
We'll continue exploring NLTK and all the cool functionality it brings out of the box - tokenization, parts-of-speech tagging, stemming, stopword removal and more.
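A rough sketch of that out-of-the-box functionality (our example, not the course's attached code; note the one-time resource downloads NLTK needs):

```python
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

# One-time downloads for the tokenizer, POS tagger and stopword list
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")
nltk.download("stopwords")

text = "Machine learning is not rocket science."
tokens = nltk.word_tokenize(text)   # tokenization
print(nltk.pos_tag(tokens))         # parts-of-speech tagging

stemmer = PorterStemmer()
stops = set(stopwords.words("english"))
print([stemmer.stem(t) for t in tokens if t.lower() not in stops])
```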
Web Scraping with BeautifulSoup 00:18:09
Web scraping is an integral part of NLP - it's how you prepare the text data that you will actually process. Web scraping can be a headache, but Beautiful Soup makes it elegant and intuitive.
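A minimal scraping sketch (ours; the URL is a placeholder - any article page would work the same way):

```python
import urllib.request
from bs4 import BeautifulSoup

# Placeholder URL, just for illustration
url = "https://en.wikipedia.org/wiki/Machine_learning"
html = urllib.request.urlopen(url).read()

soup = BeautifulSoup(html, "html.parser")
# Keep just the visible text of every paragraph tag
paragraphs = [p.get_text() for p in soup.find_all("p")]
print(paragraphs[0][:200])
```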
A Serious NLP Application : Text Auto Summarization using Python 00:11:34
Auto-summarize newspaper articles from a website (Washington Post). We'll use NLP techniques to remove stopwords, tokenize text and sentences and compute term frequencies. The Python source code (with many comments) is attached as a resource.
Python Drill : Autosummarize News Articles I 00:18:33
Code along with us in Python - we'll use NLTK to compute the frequencies of words in an article.
Python Drill : Autosummarize News Articles II 00:11:28
Code along with us in Python - we'll use NLTK to compute the frequencies of words in an article and the importance of sentences in an article.
Python Drill : Autosummarize News Articles III 00:10:23
Code along with us in Python - we'll use Beautiful Soup to parse an article downloaded from the Washington Post and then summarize it using the class we set up earlier.
Put it to work : News Article Classification using K-Nearest Neighbours 00:19:29
Classify newspaper articles into tech and non-tech. We'll see how to scrape websites to build a corpus of articles. Use NLP techniques to do feature extraction and selection. Finally, apply the K-Nearest Neighbours algorithm to classify a test instance as Tech/NonTech. The Python source code (with many comments) is attached as a resource.
Put it to work : News Article Classification using Naive Bayes Classifier 00:19:24
Classify newspaper articles into tech and non-tech. We'll see how to scrape websites to build a corpus of articles. Use NLP techniques to do feature extraction and selection. Finally, apply the Naive Bayes Classification algorithm to classify a test instance as Tech/NonTech. The Python source code (with many comments) is attached as a resource.
Python Drill : Scraping News Websites 00:15:45
Code along with us in Python - we'll use BeautifulSoup to build a corpus of news articles.
Python Drill : Feature Extraction with NLTK 00:18:51
Code along with us in Python - we'll use NLTK to extract features from articles.
Python Drill : Classification with KNN 00:04:15
Code along with us in Python - we'll use KNN algorithm to classify articles into Tech/NonTech.
Python Drill : Classification with Naive Bayes 00:08:08
Code along with us in Python - we'll use a Naive Bayes Classifier to classify articles into Tech/Non-Tech.
Document Distance using TF-IDF 00:11:03
See how search engines compute the similarity between documents. We'll represent a document as a vector, weight it with TF-IDF, and see how cosine similarity or Euclidean distance can be used to compute the distance between two documents.
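A compact sketch of that pipeline (ours, assuming scikit-learn): each document becomes a TF-IDF weighted vector, and cosine similarity scores every pair:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["the cat sat on the mat",
        "the cat lay on the rug",
        "stock markets fell sharply today"]

# Each document becomes a TF-IDF weighted vector
tfidf = TfidfVectorizer().fit_transform(docs)

# Pairwise cosine similarities; the two cat documents score far closer
print(cosine_similarity(tfidf))
```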
Put it to work : News Article Clustering with K-Means and TF-IDF 00:14:32
Create clusters of similar articles within a large corpus of articles. We'll scrape a blog to download all the blog posts, use TF-IDF to represent them as vectors. Finally, we'll perform K-Means clustering to identify 5 clusters of articles. The Python source code (with many comments) is attached as a resource.
Python Drill : Clustering with K Means 00:08:32
Code along with us in Python - We'll cluster articles downloaded from a blog using the KMeans algorithm.
Section 12: Sentiment Analysis
A Sneak Peek at what’s coming up FREE 00:02:36
Lots of new stuff is coming up in the next few classes. Sentiment analysis (or opinion mining) is a field of NLP that deals with extracting subjective information (positive/negative, like/dislike, emotions). Learn why it's useful and how to approach the problem. There are rule-based and ML-based approaches, and the details are really important - training data and feature extraction are critical. Sentiment lexicons provide us with lists of words in different sentiment categories that we can use for building our feature set. All this is in the run-up to a serious project: Twitter sentiment analysis. We'll also spend some time on regular expressions, which are pretty handy to know, as we'll see in our code-along.
Sentiment Analysis – What’s all the fuss about? 00:17:17
As people spend more and more time on the internet and the influence of social media explodes, knowing what your customers are saying about you online becomes crucial. Sentiment analysis comes in handy here. This is an NLP problem that can be approached in multiple ways; we examine a couple of rule-based approaches, one of which has become standard fare (VADER).
ML Solutions for Sentiment Analysis – the devil is in the details 00:19:57
SVM and Naive Bayes are popular ML approaches to Sentiment Analysis. But the devil really is in the details. What do you use for training data? What features should you use? Getting these right is critical.
Sentiment Lexicons ( with an introduction to WordNet and SentiWordNet) 00:18:49
Sentiment lexicons are a great help in solving problems where the subjectivity or emotion expressed by a word is important. SentiWordNet stands apart even among the popular sentiment lexicons (General Inquirer, LIWC, MPQA etc.), all of which are touched upon.
Regular Expressions 00:17:53
Regular expressions are a handy tool to have when you deal with text processing. They are a bit arcane, but pretty useful in the right situation. Understanding the operators from the basics helps you build up to constructing complex regexps.
Regular Expressions in Python 00:05:41
re is the module in Python for dealing with regular expressions. It has functions to find a pattern, substitute a pattern etc. within a string.
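A tiny taste of both (the string is made up, foreshadowing the tweet cleanup later in this section):

```python
import re

tweet = "Loving the new phone!!! http://t.co/abc123 @friend #gadgets"

# Find a pattern: every hashtag in the text
print(re.findall(r"#\w+", tweet))   # ['#gadgets']

# Substitute a pattern: strip URLs and @-mentions
print(re.sub(r"(https?://\S+|@\w+)", "", tweet))
```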
Put it to work : Twitter Sentiment Analysis 00:17:48
A serious project: accept a search term from a user and output the prevailing sentiment on Twitter for that search term. We'll use the Twitter API, SentiWordNet, SVM, NLTK and regular expressions - really work that coding muscle :)
Twitter Sentiment Analysis – Work the API 00:20:00
We'll accept a search term from a user and download 100 tweets containing that term. You'll need a corpus to train a classifier that can classify these tweets; the corpus has only tweet IDs, so we connect to the Twitter API and fetch the text of the tweets.
Twitter Sentiment Analysis – Regular Expressions for Preprocessing 00:12:24
The tweets that we downloaded contain a lot of garbage; we'll clean them up using regular expressions and NLTK to get a nice list of words representing each tweet.
Twitter Sentiment Analysis – Naive Bayes, SVM and Sentiwordnet 00:19:40
We'll train two different classifiers on our training data: Naive Bayes and SVM. The SVM will use SentiWordNet to assign weights to the elements of the feature vector.
Section 13: Decision Trees
Planting the seed – What are Decision Trees? 00:17:00
What are Decision Trees and how are they useful? Decision Trees are a visual and intuitive way of predicting what the outcome will be given some inputs. They assign an order of importance to the input variables that helps you see clearly what really influences your outcome.
Growing the Tree – Decision Tree Learning 00:18:03
Recursive Partitioning is the most common strategy for growing Decision Trees from a training set. Learn what makes one attribute sit higher up in a Decision Tree than others.
Branching out – Information Gain 00:18:51
We'll take a small detour into Information Theory to understand the concept of Information Gain. This concept forms the basis of how popular Decision Tree Learning algorithms work.
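To make the concept concrete, here is a small sketch (the labels are invented) that computes entropy in bits and the information gain of a candidate split:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

# Invented toy split: does the attribute reduce label uncertainty?
parent = ["yes"] * 3 + ["no"] * 5
left, right = ["yes", "yes", "yes", "no"], ["no"] * 4

weighted = (len(left) * entropy(left) + len(right) * entropy(right)) / len(parent)
print("information gain:", entropy(parent) - weighted)   # ~0.549 bits
```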
Decision Tree Algorithms FREE 00:07:50
ID3, C4.5, CART and CHAID are commonly used Decision Tree Learning algorithms. Learn what makes them different from each other. Pruning is a mechanism to avoid one of the risks inherent in Decision Trees, i.e. overfitting.
Titanic : Decision Trees predict Survival (Kaggle) – I 00:19:21
Build a decision tree to predict the survival of a passenger on the Titanic. This is a challenge posed by Kaggle (a competitive online data science community). We'll start off by exploring the data and transforming the data into feature vectors that can be fed to a Decision Tree Classifier.
Titanic : Decision Trees predict Survival (Kaggle) – II 00:14:16
We continue with the Kaggle challenge. Let's feed the training set to a Decision Tree Classifier and then parse the results.
Titanic : Decision Trees predict Survival (Kaggle) – III 00:13:00
We'll use our Decision Tree Classifier to predict the results on Kaggle's test data set. Submit the results to Kaggle and see where you stand!
Section 14: A Few Useful Things to Know About Overfitting
Overfitting – the bane of Machine Learning 00:19:03
Overfitting is one of the biggest problems with Machine Learning - it's a trap that's easy to fall into and important to be aware of.
Overfitting Continued 00:11:19
Overfitting is a difficult problem to solve - there is no way to avoid it completely, and by overcorrecting for it we fall into the opposite error of underfitting.
Cross Validation 00:18:55
Cross Validation is a popular way to choose between models. There are a few different variants - K-Fold Cross validation is the most well known.
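A minimal K-Fold sketch (ours, assuming scikit-learn and one of its bundled datasets):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# 5-fold CV: train on four folds, test on the fifth, rotate through all five
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5)
print(scores, scores.mean())
```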
Simplicity is a virtue – Regularization 00:07:18
Overfitting occurs when the model becomes too complex. Regularization helps maintain the balance between accuracy and complexity of the model.
The Wisdom of Crowds – Ensemble Learning 00:16:39
The crowd is indeed wiser than the individual - at least with ensemble learning. The Netflix competition showed that ensemble learning helps achieve tremendous improvements in accuracy - many learners perform better than just one.
Ensemble Learning continued – Bagging, Boosting and Stacking 00:18:02
Bagging, Boosting and Stacking are different techniques to help build an ensemble that rocks!
Section 15: Random Forests
Random Forests – Much more than trees 00:12:28
Decision trees are cool but tricky to get right - they really tend to overfit. Random Forests to the rescue! Use an ensemble of decision trees - all the benefits of decision trees, few of the pains!
Back on the Titanic – Cross Validation and Random Forests 00:20:03
Machine learning is not a one-shot process. You'll need to iterate and test multiple models to see what works best. Let's use cross validation to compare the accuracy of different models - Decision Trees vs Random Forests.
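Something like the following comparison (our sketch, on a scikit-learn bundled dataset rather than the Titanic data):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# The same 5-fold cross validation for both models makes the comparison fair
for model in (DecisionTreeClassifier(random_state=0),
              RandomForestClassifier(n_estimators=100, random_state=0)):
    scores = cross_val_score(model, X, y, cv=5)
    print(type(model).__name__, round(scores.mean(), 3))
```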
Section 16: Recommendation Systems
What do Amazon and Netflix have in common? 00:16:43
Recommendations - good quality, personalized recommendations - are the holy grail for many online stores. What is the driving force behind this quest?
Recommendation Engines – A look inside 00:10:45
Recommendation Engines perform a variety of tasks - but the most important one is to find products that are most relevant to the user. Content based filtering, collaborative filtering and Association rules are common approaches to do so.
What are you made of? – Content-Based Filtering 00:13:35
Content based filtering finds products relevant to a user - based on the content of the product (attributes, description, words etc).
With a little help from friends – Collaborative Filtering 00:10:26
Collaborative Filtering is a general term for the idea that users can help each other find the products they like. Today this is by far the most popular approach to recommendations.
A Neighbourhood Model for Collaborative Filtering 00:17:51
Neighbourhood models - also known as Memory based approaches - rely on finding users similar to the active user. Similarity can be measured in many ways - Euclidean Distance, Pearson Correlation and Cosine similarity being a few popular ones.
Top Picks for You! – Recommendations with Neighbourhood Models 00:09:41
We continue with Neighbourhood models and see how to predict the rating of a user for a new product. Use this to find the top picks for a user.
Discover the Underlying Truth – Latent Factor Collaborative Filtering 00:20:13
Latent factor methods identify hidden factors that influence users, learned from user history. Matrix Factorization is used to find these factors. The approach was popularized for recommendations by the Netflix Prize winners, and many modern recommendation systems, including Netflix's, use some form of matrix factorization.
Latent Factor Collaborative Filtering contd. 00:12:09
Matrix Factorization for Recommendations can be expressed as an optimization problem. Stochastic Gradient Descent or Alternating least squares can then be used to solve that problem.
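A toy sketch of that optimization (our code; the ratings matrix is invented): factor R into user factors P and item factors Q with stochastic gradient descent over the observed entries:

```python
import numpy as np

# Tiny ratings matrix (0 = unrated); the numbers are invented
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

n_users, n_items, k = R.shape[0], R.shape[1], 2
rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(n_users, k))   # latent user factors
Q = rng.normal(scale=0.1, size=(n_items, k))   # latent item factors

lr, reg = 0.01, 0.02                           # learning rate, regularization
for _ in range(2000):                          # stochastic gradient descent
    for u, i in zip(*R.nonzero()):             # only the observed ratings
        err = R[u, i] - P[u] @ Q[i]
        p_u = P[u].copy()
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * p_u - reg * Q[i])

print(np.round(P @ Q.T, 2))  # predictions, including the unrated cells
```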
Gray Sheep and Shillings – Challenges with Collaborative Filtering 00:08:12
Gray sheep, synonymy, data sparsity, shilling attacks etc. are a few challenges that people face with Collaborative Filtering.
The Apriori Algorithm for Association Rules 00:18:31
Association rules help you find recommendations for products that might complement the user's choices. The seminal paper on association rules introduced an efficient technique for finding these rules - The Apriori Algorithm.
Section 17: Recommendation Systems in Python
Back to Basics : Numpy in Python 00:18:05
Numpy arrays are pretty cool for performing mathematical computations on your data.
Back to Basics : Numpy and Scipy in Python 00:14:19
We continue with a basic tutorial on Numpy and Scipy.
Movielens and Pandas 00:16:45
Movielens is a famous dataset of movie ratings. We'll use Pandas to read and play around with the data.
Code Along – What’s my favorite movie? – Data Analysis with Pandas 00:06:18
We continue playing with the Movielens data - let's find the top n rated movies for a user.
Code Along – Movie Recommendation with Nearest Neighbour CF 00:18:10
Let's find some recommendations now. We'll use neighbourhood-based collaborative filtering to find the users most similar to a given user and then predict that user's rating for a movie.
Code Along – Top Movie Picks (Nearest Neighbour CF) 00:06:16
We've predicted the user's rating for all movies. Let's pick the top recommendations for a user.
Code Along – Movie Recommendations with Matrix Factorization 00:17:55
Matrix Factorization was first used for recommendations during the Netflix challenge. Let's implement this on the Movielens data and find some recommendations!
Code Along – Association Rules with the Apriori Algorithm 00:09:50
The Apriori algorithm was introduced in a seminal paper that described how to mine large datasets for association rules efficiently. Let's work through the algorithm in Python.
Section 18: A Taste of Deep Learning and Computer Vision
Computer Vision – An Introduction 00:18:08
A quick intro to Computer Vision, and one of the most popular starter problems - identifying handwritten digits using the MNIST database. We also talk about feature extraction from images.
Perceptron Revisited 00:16:00
Deep Learning Networks are the cutting edge solution for the handwritten digit recognition problem and many others in computer vision. These are often large artificial neural networks. The perceptron is the simplest of artificial neural networks - it becomes a building block for other complex networks.
Deep Learning Networks Introduced 00:17:01
Multilayer perceptrons build on the idea of the perceptron. They have layers of perceptrons that process the input and feed it forward to other layers.
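A minimal sketch of such a network (ours, assuming scikit-learn; its small bundled digits set stands in for full MNIST):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# scikit-learn's small 8x8 digits set stands in for the full MNIST data
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer of units feeding forward into a 10-way output layer
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
mlp.fit(X_train, y_train)
print(mlp.score(X_test, y_test))
```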
Code Along – Handwritten Digit Recognition -I 00:14:29
Train a neural network to classify handwritten digits in Python. We'll start by downloading and unzipping the MNIST database images to create training and test datasets.
Code Along – Handwritten Digit Recognition -II 00:17:35
Continuing on with the handwritten digit recognition problem, we build a neural network and specify the training process.
Code Along – Handwritten Digit Recognition -III 00:06:01
Now that we have a trained neural network, let's feed it some test data and check the accuracy.