Introduction to Machine Learning


  • 2020
  • 1 Season

Introduction to Machine Learning from The Great Courses Signature Collection is a comprehensive 25-episode series that delves into the fascinating world of machine learning. The show features Michael Littman, a renowned expert in machine learning and computer science, who shares his expertise and insights throughout.

The show aims to provide a beginner's guide to the field of machine learning, which is transforming the way we live, work and think. The series covers a range of topics, including supervised learning, unsupervised learning, deep learning, neural networks, computer vision, natural language processing, and more.

In the first few episodes of the series, Littman lays the groundwork for the subject matter, explaining the basic concepts and terminology in a clear and concise manner. He provides examples of how machine learning is used in everyday life, from email spam filtering to predictive analytics in healthcare.

As the series progresses, the episodes become more technical, diving deeper into the intricacies of the algorithms used in machine learning. Littman provides step-by-step guides on how to build and train machine learning models, explaining the principles behind the algorithms used.

The show also highlights some of the key challenges and ethical considerations in machine learning. Littman explains how biases can be introduced into machine learning models and provides examples of how this could lead to unintended consequences. He encourages viewers to think critically about the impact of machine learning on society and to consider the ethical implications of the technology.

The visual aids used in the show are of high quality and add an extra dimension to the explanations provided. Littman uses animations, graphs, and diagrams to illustrate complex concepts, making it easier to understand for beginners. The show also includes interviews with experts in the field, providing additional insights into the various applications and challenges of machine learning.

One of the strengths of the series is its emphasis on hands-on learning. Littman provides practical exercises to help viewers apply the concepts they have learned in a real-world context. These exercises are designed to be accessible to beginners while still challenging enough to provide a meaningful learning experience.

Overall, Introduction to Machine Learning from The Great Courses Signature Collection is an excellent resource for anyone interested in learning about this rapidly growing field. Whether you are a seasoned computer scientist or a complete beginner, the show provides a comprehensive and engaging introduction to the subject matter. Michael Littman's clear and enthusiastic delivery makes the show accessible and engaging, while the high-quality visual aids and practical exercises make it an effective learning tool. Highly recommended for anyone looking to explore the exciting world of machine learning.

Introduction to Machine Learning is a series that ran for 1 season (25 episodes), premiering November 6, 2020, on The Great Courses Signature Collection.


Seasons
25. Mastering the Machine Learning Process
November 6, 2020
Finish the series with a lightning tour of meta-learning: algorithms that learn how to learn, making it possible to solve problems that are otherwise unmanageable. Examine two approaches: one that reasons about discrete problems using satisfiability solvers and another that allows programmers to optimize continuous models. Close with a glimpse of the future for this astounding field.
24. Protecting Privacy within Machine Learning
November 6, 2020
Machine learning is both a cause and a cure for privacy concerns. Hear about two notorious cases where de-identified data was unmasked. Then, step into the role of a computer security analyst, evaluating different threats, including pattern recognition and compromised medical records. Discover how to think like a digital snoop and evaluate different strategies for thwarting an attack.
23. The Unexpected Power of Over-Parameterization
November 6, 2020
Probe the deep-learning revolution that took place around 2015, conquering worries about overfitting data due to the use of too many parameters. Dr. Littman sets the stage by taking you back to his undergraduate psychology class, taught by one of The Great Courses' original professors. Chart the breakthrough that paved the way for deep networks that can tackle hard, real-world learning problems.
22. Causal Inference Comes to Machine Learning
November 6, 2020
Get acquainted with a powerful new tool in machine learning, causal inference, which addresses a key limitation of classical methods: the focus on correlation to the exclusion of causation. Practice with a historic problem of causation: the link between cigarette smoking and cancer, which will always be obscured by confounding factors. Also look at other cases of correlation versus causation.
21. Inverse Reinforcement Learning from People
November 6, 2020
Are you no good at programming? Machine learning can take a demonstration, predict what you want, and suggest improvements. For example, inverse reinforcement learning turns the tables on the following logical relation: if you are a horse and like carrots, go to the carrot. Inverse reinforcement learning looks at it like this: if you see a horse go to the carrot, it might be because the horse likes carrots.
20. Deep Learning for Speech Recognition
November 6, 2020
Consider the problem of speech recognition and the quest, starting in the 1950s, to program computers for this task. Then, delve into algorithms that machine learning uses to create today's sophisticated speech recognition systems. Get a taste of the technology by training with deep-learning software for recognizing simple words. Finally, look ahead to the prospect of conversing computers.
19. Making Photorealistic Images with GANs
November 6, 2020
A new approach to image generation and discrimination pits both processes against each other in a generative adversarial network, or GAN. The technique can produce a new image based on a reference class, for example making a person look older or younger, or automatically filling in a landscape after a building has been removed. GANs have great potential for creativity and, unfortunately, fraud.
18. Making Stylistic Images with Deep Networks
November 6, 2020
One way to think about the creative process is as a two-stage operation, involving an idea generator and a discriminator. Study two approaches to image generation using machine learning. In the first, a target image of a pig serves as the discriminator. In the second, the discriminator is programmed to recognize the general characteristics of a pig, which is more how people recognize objects.
17. Deep Networks That Output Language
November 6, 2020
Continue your study of machine learning and language by seeing how computers not only read text, but how they can also generate it. Explore the current state of machine translation, which rivals the skill of human translators. Also, learn how algorithms handle a game that Professor Littman played with his family, where a given phrase is expanded piecemeal to create a story.
16. Text Categorization with Words as Vectors
November 6, 2020
Previously, you saw how machine learning is used in spam filtering. Dig deeper into problems of language processing, such as how a computer guesses the word you are typing, even when you misspell it badly. Focus on the concept of word embeddings, which define the meanings of words using vectors in high-dimensional space, a method that involves techniques from linear algebra.
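The geometry behind word embeddings can be sketched in a few lines of Python. The tiny 3-dimensional vectors below are invented for illustration (real embeddings are learned from text and have hundreds of dimensions), but the cosine-similarity comparison works the same way:

```python
import math

# Invented 3-d "embeddings": nearby vectors stand in for related words.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.8, 0.9, 0.1],
    "apple": [0.1, 0.1, 0.9],
}

def cosine(u, v):
    """Cosine similarity: compares the direction of two vectors, ignoring length."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

print(cosine(vectors["king"], vectors["queen"]))  # close to 1: similar "meanings"
print(cosine(vectors["king"], vectors["apple"]))  # much smaller: unrelated
```

Swapping in real learned vectors (for example from a word2vec model) changes only the dictionary, not the arithmetic.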
15. Getting a Deep Learner Back on Track
November 6, 2020
Roll up your sleeves and debug a deep-learning program. The software is a neural net classifier designed to separate pictures of animals and bugs. In this case, fix the bugs in the code to find the bugs in the images! Professor Littman walks you through diagnostic steps relating to the representational space, the loss function, and the optimizer.
14. Deep Learning for Computer Vision
November 6, 2020
Discover how the ImageNet challenge helped revive the field of neural networks through a technique called deep learning, which is ideal for tasks such as computer vision. Consider the problem of image recognition and the steps deep learning takes to solve it. Dr. Littman throws out his own challenge: Train a computer to distinguish foot files from cheese graters.
13. Games with Reinforcement Learning
November 6, 2020
In 1959, computer pioneer Arthur Samuel popularized the term machine learning for his checkers-playing program. Delve into strategies for the board game Othello as you investigate today's sophisticated algorithms for improving play, at least for the machine. Also explore game-playing tactics for chess, Jeopardy!, poker, and Go, which have been a hotbed for machine-learning research.
12. Recommendations with Three Types of Learning
November 6, 2020
Recommender systems are ubiquitous, from book and movie tips to work aids for professionals. But how do they function? Look at three different approaches to this problem, focusing on Professor Littman's dilemma as an expert reviewer for conference paper submissions, numbering in the thousands. Also, probe Netflix's celebrated one-million-dollar prize for an improved recommender algorithm.
11. Clustering and Semi-Supervised Learning
November 6, 2020
See how a combination of labeled and unlabeled examples can be exploited in machine learning, specifically by using clustering to learn about the data before making use of the labeled examples.
10. Pitfalls in Applying Machine Learning
November 6, 2020
Explore pitfalls that loom when applying machine learning algorithms to real-life problems. For example, see how survival statistics from a boating disaster can lead to false conclusions. Also, look at cases from medical care and law enforcement that reveal hidden biases in the way data is interpreted. Since an algorithm is doing the interpreting, understanding what's happening can be a challenge.
9. The Fundamental Pitfall of Overfitting
November 6, 2020
Having covered the five fundamental classes of machine learning in the previous episodes, now focus on a risk common to all: overfitting. This is the tendency to model training data too well, which can harm the performance on the test data. Practice avoiding this problem using the diabetes dataset from episode 3. Hear tips on telling the difference between real signals and spurious associations.
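The overfitting risk the episode describes can be demonstrated with a toy experiment (the data here is synthetic, not the course's diabetes dataset): a model that memorizes its training set scores perfectly on it, yet loses to a simpler rule on fresh data.

```python
import random

random.seed(0)

# Synthetic data: the true signal is "label 1 when x > 0.5",
# but 20% of the labels are flipped by noise.
def noisy_sample(n):
    data = []
    for _ in range(n):
        x = random.random()
        y = int(x > 0.5)
        if random.random() < 0.2:
            y = 1 - y
        data.append((x, y))
    return data

train, test = noisy_sample(1000), noisy_sample(1000)

def memorizer(x):
    """Overfit model: returns the label of the single closest training point."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

def simple_rule(x):
    """The underlying pattern the data was generated from."""
    return int(x > 0.5)

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

print(accuracy(memorizer, train))   # 1.0: it memorizes the noise perfectly
print(accuracy(memorizer, test))    # lower: the memorized noise doesn't transfer
print(accuracy(simple_rule, test))  # the plain rule generalizes better
```

Perfect training accuracy here is a warning sign, not an achievement: the memorizer has modeled the noise along with the signal.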
8. Nearest Neighbors for Using Similarity
November 6, 2020
Simple to use and speedy to execute, the nearest neighbor algorithm works on the principle that adjacent elements in a dataset are likely to share similar characteristics. Try out this strategy for determining a comfortable combination of temperature and humidity in a house. Then, dive into the problem of malware detection, seeing how the nearest neighbor rule can sort good software from bad.
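As a rough illustration of the nearest-neighbor principle (the temperature/humidity readings below are made up, not taken from the episode), a k-nearest-neighbor classifier fits in a few lines of plain Python:

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.
    `train` is a list of ((temp, humidity), label) pairs; squared Euclidean distance."""
    neighbors = sorted(
        train,
        key=lambda item: (item[0][0] - query[0]) ** 2 + (item[0][1] - query[1]) ** 2,
    )[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Invented comfort data: (temperature F, humidity %).
readings = [
    ((68, 40), "comfortable"), ((70, 45), "comfortable"), ((72, 50), "comfortable"),
    ((85, 80), "uncomfortable"), ((88, 85), "uncomfortable"), ((90, 90), "uncomfortable"),
]

print(knn_predict(readings, (71, 48)))  # falls near the comfortable cluster
print(knn_predict(readings, (87, 82)))  # falls near the uncomfortable cluster
```

There is no training step at all: the dataset itself is the model, which is why the method is so simple to use.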
7. Genetic Algorithms for Evolved Rules
November 6, 2020
When you encounter a new type of problem and don't yet know the best machine learning strategy to solve it, a ready first approach is a genetic algorithm. These programs apply the principles of evolution to artificial intelligence, employing natural selection over many generations to optimize your results. Analyze several examples, including finding where to aim.
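A minimal genetic algorithm along the lines the episode describes, evolving a string of letters toward a target by selection, crossover, and mutation. The target word and all parameters here are invented for illustration:

```python
import random

random.seed(1)

TARGET = "CARROT"
LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(s):
    """Number of positions that already match the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.1):
    """Randomly replace each letter with small probability."""
    return "".join(random.choice(LETTERS) if random.random() < rate else c for c in s)

def crossover(a, b):
    """Splice two parents at a random cut point."""
    cut = random.randrange(len(TARGET))
    return a[:cut] + b[cut:]

# Start from random strings and apply natural selection over many generations.
population = ["".join(random.choice(LETTERS) for _ in TARGET) for _ in range(100)]
for generation in range(500):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    parents = population[:20]  # selection: the fittest survive unchanged
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(80)
    ]

print(population[0], "found after", generation, "generations")
```

Nothing in the loop knows how to spell the target; fitness pressure alone drives the population there, which is the appeal of the approach when no better strategy is known.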
6. Bayesian Models for Probability Prediction
November 6, 2020
A program need not understand the content of an email to know with high probability that it's spam. Discover how machine learning does so with the Naive Bayes approach, which is a simplified application of Bayes' theorem to a simplified model of language generation. The technique illustrates a very useful strategy: going backward from effects (in this case, words) to their causes (spam).
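The "effects back to causes" strategy can be sketched with a toy Naive Bayes spam filter. The word counts below are invented; a real filter would estimate them from thousands of labeled emails:

```python
import math

# Invented word counts observed in spam vs. ham (non-spam) messages.
spam_counts = {"free": 20, "win": 15, "meeting": 1, "report": 1}
ham_counts  = {"free": 2,  "win": 1,  "meeting": 15, "report": 20}

def log_score(words, counts, prior):
    """log P(class) + sum of log P(word | class), with add-one smoothing
    so unseen words never zero out the whole score."""
    total = sum(counts.values())
    vocab = len(set(spam_counts) | set(ham_counts))
    score = math.log(prior)
    for w in words:
        score += math.log((counts.get(w, 0) + 1) / (total + vocab))
    return score

def classify(words, p_spam=0.5):
    """Pick whichever cause (spam or ham) better explains the observed words."""
    spam = log_score(words, spam_counts, p_spam)
    ham = log_score(words, ham_counts, 1 - p_spam)
    return "spam" if spam > ham else "ham"

print(classify(["free", "win"]))        # words that spam generates often
print(classify(["meeting", "report"]))  # words that ham generates often
```

The model never understands the email; it only asks which class was more likely to have generated those words, exactly the backward reasoning the episode describes.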
5. Opening the Black Box of a Neural Network
November 6, 2020
Take a deeper dive into neural networks by working through a simple algorithm implemented in Python. Return to the green-screen problem from the first episode to build a learning algorithm that places the professor against a new backdrop.
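The course's own code isn't reproduced here, but a single-neuron classifier trained by gradient descent, in the spirit of the green-screen example, can be sketched as follows. The pixel distributions are invented stand-ins for real image data:

```python
import math
import random

random.seed(0)

# Invented stand-in for the green-screen task: label 1 for green-ish
# background pixels, label 0 for everything else, as (r, g, b) in [0, 1].
def sample(label):
    if label:  # background: high green, low red/blue
        return [random.uniform(0, .3), random.uniform(.7, 1), random.uniform(0, .3)]
    return [random.uniform(.3, 1), random.uniform(0, .6), random.uniform(.3, 1)]

data = [(sample(l), l) for l in [0, 1] * 200]

# One neuron: weighted sum of r, g, b squashed through a sigmoid.
w, b = [0.0, 0.0, 0.0], 0.0
lr = 0.5
for _ in range(50):
    for x, y in data:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        p = 1 / (1 + math.exp(-z))       # predicted probability of "background"
        for i in range(3):
            w[i] -= lr * (p - y) * x[i]  # gradient of logistic loss w.r.t. w[i]
        b -= lr * (p - y)

def predict(x):
    return int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0)

acc = sum(predict(x) == y for x, y in data) / len(data)
print("training accuracy:", acc)
```

Because these two invented pixel populations are linearly separable (the green channel alone distinguishes them), a single neuron suffices; the episode's point is that the same descent procedure scales up to full networks.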
4. Neural Networks for Perceptual Rules
November 6, 2020
Graduate to a more difficult class of problems: learning from images and auditory information. Here, it makes sense to address the task more or less the way the brain does, using a form of computation called a neural network. Explore the general characteristics of this powerful tool. Among the examples, compare decision-tree and neural-network approaches to recognizing handwritten digits.
3. Decision Trees for Logical Rules
November 6, 2020
Can machine learning beat a rhyming rule, taught in elementary school, for determining whether a word is spelled with an I-E or an E-I, as in diet and weigh? Discover that a decision tree is a convenient tool for approaching this problem. After experimenting, use Python to build a decision tree for predicting the likelihood that an individual will develop diabetes based on eight health factors.
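The heart of decision-tree learning is repeatedly choosing the most informative question to ask. Below is a minimal information-gain split chooser on a made-up two-feature dataset loosely inspired by the I-E/E-I example (not the course's actual data):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_split(rows, labels):
    """Pick the feature whose split most reduces entropy (information gain),
    the step a decision-tree learner repeats at every node."""
    base = entropy(labels)
    best = None
    for f in range(len(rows[0])):
        groups = {}
        for row, lab in zip(rows, labels):
            groups.setdefault(row[f], []).append(lab)
        remainder = sum(len(g) / len(labels) * entropy(g) for g in groups.values())
        gain = base - remainder
        if best is None or gain > best[1]:
            best = (f, gain)
    return best

# Invented features: (follows_c, stressed_syllable); label is the spelling used.
rows   = [(0, 0), (0, 1), (0, 0), (1, 0), (1, 1), (1, 1)]
labels = ["ie",   "ie",   "ie",   "ei",   "ei",   "ei"]
print(best_split(rows, labels))  # feature 0, "follows c", gives the biggest gain
```

A full tree learner would recurse on each resulting group until the labels are pure; the diabetes tree in the episode is built by exactly this kind of repeated splitting, just over eight health factors.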
2. Starting with Python Notebooks and Colab
November 6, 2020
The demonstrations in this series use the Python programming language, the most popular and widely supported language in machine learning. Dr. Littman shows you how to run programming examples from your web browser, which avoids the need to install the software on your own computer, saving installation headaches and giving you more processing power than is available on a typical home computer.
1. Telling the Computer What We Want
November 6, 2020
This series teaches you about machine-learning programs and how to write them in the Python programming language. For those new to Python, a get-started tutorial is included. Professor Michael L. Littman covers major concepts and techniques, all illustrated with real-world examples such as medical diagnosis, game-playing, spam filters, and media special effects.
Where to Watch Introduction to Machine Learning
Introduction to Machine Learning is available for streaming on The Great Courses Signature Collection website, both individual episodes and full seasons. You can also watch Introduction to Machine Learning on demand at Amazon Prime and Amazon.
  • Premiere Date
    November 6, 2020