# Stanford Machine Learning Lecture 1: Introduction

Before delving into the definition of machine learning, it is helpful to get a feel for it through a few real-world applications that are driven by it.

Here are a few examples of machine learning that we use every day:

Product Recommendation: E-commerce sites (like Amazon) and streaming services (like Netflix) recommend products to buy or shows to watch based on our history on the platform. The sites may do so by analysing the behaviour of users with similar online habits.

Online Fraud Detection: Companies use ML for protection against fraud and money laundering. They compare millions of transactions and classify each as legitimate or illegitimate.

### Definition of Machine Learning

Over time, various computer scientists have given their own definitions of ML. A few that stand out are mentioned below.

Field of study that gives computers the ability to learn without being explicitly programmed. – Arthur Samuel (1959)

Samuel famously wrote a program that learned how to play Checkers better than he could. (Link to a paper on the same).

Well posed Learning Problem: A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E. – Tom Mitchell (1998)

According to Mitchell’s definition, for the Checkers program: the task T is playing the game of Checkers; the performance measure P could be the percentage of games the program wins; and the experience E is the set of games it has played.
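
Mitchell’s framing can be made concrete in a few lines. The game results below are made-up data, just to show how E and P would be computed for the Checkers example:

```python
# Hypothetical record of games: True means the program won that game.
game_results = [True, False, True, True, False, True, True, False]

experience_E = len(game_results)                  # E: number of games played
performance_P = sum(game_results) / experience_E  # P: fraction of games won

print(f"E = {experience_E} games, P = {performance_P:.1%} won")
```

As E grows (more games played), a learning program's P should improve.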

The class is organised into 4 major sections: Supervised Learning, Unsupervised Learning, Learning Theory and Reinforcement Learning.

### Supervised Learning

In Supervised Learning, the algorithm is given training data which contains the “correct answer” for all examples.

Suppose you are selling your house and you want to know what a good market price would be. One way to do this is to first collect information on recently sold houses and build a model of housing prices.

For instance, a supervised learning algorithm for predicting house prices would take as input a set of features affecting the price, say, the area of the house. The training data would also contain the price of each house.

It is called “supervised” because we supervise the learning algorithm by providing it with the correct answers during training.

The plot shown in the lecture contains the price of each house in the dataset plotted against its area.

Now suppose someone is selling a house in the same region this data comes from and wants an algorithm to suggest a selling price. There are several ways to do it:

• Fit a straight line through the data and read off the price corresponding to the area of the house in question.
• Fit a quadratic function (which seems to fit the data better) and do the same.
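
The two fits above can be sketched with NumPy. The area/price numbers here are made up for illustration:

```python
import numpy as np

# Hypothetical housing data: area (sq ft) vs price (thousands of dollars).
area  = np.array([1000, 1500, 2000, 2500, 3000], dtype=float)
price = np.array([200,  320,  410,  480,  540], dtype=float)

# Least-squares fits: a straight line and a quadratic through the data.
line = np.polyfit(area, price, deg=1)
quad = np.polyfit(area, price, deg=2)

# Predict the price of a 2200 sq ft house with each model.
query = 2200
print("linear prediction:   ", np.polyval(line, query))
print("quadratic prediction:", np.polyval(quad, query))
```

Which fit to prefer is itself a modeling decision; later lectures treat this more carefully.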

The housing problem is an example of a regression problem: regression refers to the fact that the variable we are trying to predict (the price) is a continuous value.

Classification problem: The variable one is trying to predict in this case is discrete rather than continuous.

For example, an email can be classified as either spam or not spam.

• A classification problem requires that examples be assigned to one of two or more classes.
• A classification problem can have real-valued or discrete input variables.
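
To make the discrete-output idea concrete, here is a toy spam/not-spam classifier. It uses a hand-picked keyword list rather than learned parameters (a real spam filter would learn its rules from labeled examples), so treat it only as an illustration of a two-class output:

```python
# Hand-picked keywords, purely for illustration; real systems learn these.
SPAM_WORDS = {"winner", "free", "prize", "urgent"}

def classify(email: str) -> str:
    words = email.lower().split()
    spam_hits = sum(1 for w in words if w in SPAM_WORDS)
    # Discrete output: exactly one of two classes.
    return "spam" if spam_hits >= 2 else "not spam"

print(classify("URGENT you are a WINNER claim your FREE prize"))  # spam
print(classify("Meeting moved to 3pm tomorrow"))                  # not spam
```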

### Unsupervised Learning

These algorithms infer patterns from a dataset that has not been labeled, classified, or categorized. Loosely speaking, given a dataset, you ask the algorithm to find interesting structure in it.

Clustering is one example of this type of learning: groups (clusters) form around data points that have similar structure.
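
As a sketch of clustering, here is a plain k-means loop on made-up two-dimensional data (two blobs of points, no labels). The algorithm alternates between assigning each point to its nearest centroid and moving each centroid to the mean of its cluster:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unlabeled data: two well-separated blobs of 50 points each.
data = np.vstack([
    rng.normal(loc=(0, 0), scale=0.5, size=(50, 2)),
    rng.normal(loc=(5, 5), scale=0.5, size=(50, 2)),
])

def kmeans(points, k, steps=10):
    """Plain k-means: assign points to the nearest centroid,
    then move each centroid to its cluster's mean."""
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(steps):
        dists = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        centroids = np.array([points[labels == i].mean(axis=0)
                              for i in range(k)])
    return labels, centroids

labels, centroids = kmeans(data, k=2)
print(centroids)  # the centroids end up near the two blob centers
```

Note that the algorithm is never told which blob a point belongs to; the structure is inferred from the data alone.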

Unsupervised learning algorithms are used in a variety of problems:

• Image processing (to group similar pixels together)
• Social Network analysis
• Astronomical data analysis
• Market Segmentation
• Cocktail party problem (separating overlapping audio sources)
• To understand gene data

### Reinforcement Learning

In reinforcement learning, an artificial intelligence faces a game-like situation. The computer employs trial and error to solve the problem. To get the machine to do what we want, the algorithm rewards or penalises its actions; the goal is to maximize the total reward.
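
A minimal sketch of this reward idea, using a hypothetical two-action setup (not an example from the lecture): the agent repeatedly picks an action, receives a reward or not, and shifts toward the action with the higher average payoff.

```python
import random

random.seed(0)

true_reward = {"left": 0.2, "right": 0.8}   # hidden from the agent
estimates = {"left": 0.0, "right": 0.0}     # agent's running averages
counts = {"left": 0, "right": 0}

for step in range(1000):
    # Explore a random action 10% of the time; otherwise exploit
    # the action that currently looks best (epsilon-greedy).
    if random.random() < 0.1:
        action = random.choice(["left", "right"])
    else:
        action = max(estimates, key=estimates.get)
    reward = 1 if random.random() < true_reward[action] else 0
    counts[action] += 1
    # Incrementally update the average observed reward for this action.
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)
```

After enough trials the estimate for the better action dominates, so the agent's trial-and-error behaviour converges on the higher-reward choice.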

Reinforcement learning is used in areas such as robotics and game playing (for example, building a chess program).

### Course Pre-requisites

1. Basic Programming and Data Structures knowledge
2. Familiarity with Probability and Statistics
3. Basic understanding of Linear Algebra
4. Familiarity with one programming language, preferably MATLAB/Octave

Stanford Machine Learning (CS229) By Andrew Ng: Complete List of My Notes
