Introduction to Machine Learning

Search engines. Navigation systems. Game-playing robots. Learn how smart machines got that way in this course taught by a pioneer researcher in machine learning.
Introduction to Machine Learning is rated 4.0 out of 5 by 29 reviewers.
Rated 5 out of 5 by from Seems Excellent! I am still at the beginning, but it seems that this is exactly what I need. Also, the presenter is active and engaging!
Date published: 2022-06-29
Rated 5 out of 5 by from I watched all 25 lectures Review of Prof. Littman's 'Introduction to Machine Learning (2020)' (4 DVDs). (Review submitted March 6, 2022.) For me, this series was filled with insights. In Lecture 14, I learned the story of how researchers surprisingly won the 2012 ImageNet challenge by 10 percentage points. It was the first time the 'neural network' technique had lived up to its hype of decades earlier. This event ignited a 'deep learning' revolution across computer science (Lectures 16-20) and the researchers received the 2018 Turing Award. When I heard about neural networks in the early 1990s, the sigmoid activation function was typically used. In this series, I learned that a much simpler 'ReLU' activation function is now used so GPUs can dramatically improve the training performance (e.g., to weeks instead of years). The prof showed how a neural network calculation can be written as a simple single-threaded program (Lecture 4). I'd heard of the latent semantic analysis approach (of converting text to numeric vectors) years ago, a linear technique. In this series, I learned about a successor, global vectors ("GloVe" of 2014), which can be downloaded (for English at least), and the prof used them to boost the test scores of a simple text classifier he wrote (Lecture 16). Years ago I heard an expert in text classification say that 'support vector machines' were his favored technique, but I didn't understand then what they were. I was delighted that the prof spent several minutes on this topic (Lecture 9) and provided intuitive visual examples (basically SVMs find the best linear plane separating the 2 types of points). The lectures on Disc 4 were mostly tougher ones, but don't skip Lecture 23, which is an easier one on the recent observation (2018) that 'double descent' is common in many scenarios (more parameters help, then even more cause overfitting, but then even more can improve results again). 
Often 95% of the network can be pruned away at the end, but it seems the extra random weights are needed at the beginning for learning to be likely to find a good solution, which researchers are trying to better understand. Watching this series, I often got lost the first time watching a lecture (e.g., Lecture 17 on GPT-2). I've had the same problem watching several excellent series before, e.g., Hazen's 'Joy of Science (2001)' and Norden's 'Understanding the Brain (2007)'. I try to understand every detail of a series. My trick is, after watching a lecture, to start watching it again. During this second pass, I have the advantage of knowing what's coming next, often examples or details that make the earlier parts easier to understand. If I still don't understand something, I replay that part repeatedly until I get it (or have to give up). In past series, I've felt I've usually understood at least 95% of the material (to the extent presented) before moving on to the next lecture. In this series, I sometimes moved on at maybe just 80%, often because I didn't try to understand every line of Python code, which I think would have sometimes required consulting online documentation for machine learning libraries (e.g., Scikit-learn and Keras). I often don't watch a lecture in one sitting. I typically watch these series while having dinner. A lecture might get spread over several days if I have to rewatch parts, which I'm willing to do. But typically I don't have time available for 'homework'. I just get what I can out of the lectures. (I did make some exceptions in the case of this series however, as noted below.) Only after watching all the lectures did I try logging into the "Google Colab" via the urls provided. I hadn't heard of this environment before. It was good to be made aware of what I guess computer science students are often using these days. (30 years since I was one of those.) 
I had no previous Python experience, but I ran the "starter example" of L2 successfully (on Jan 3, 2022), including with some of my own edits to the Python code. Then I randomly picked L17 and tried running the first program, not realizing for a while that Colab was downloading several gigabytes from GitHub (that I presume the later L17 programs need); I stopped it after a few minutes. I was curious how long some of the L4 programs would take to run, but its 2nd program quickly failed with an error (KeyError) on the "X = X[permutation]" line. It was working when the prof ran it for his lecture, so I figured something had changed in the environment, and likely there was a simple fix. After maybe an hour I found the fix, which was to go back to the 1st program where X is set and change its last line from "X, y = fetch_openml('mnist_784', version=1, return_X_y=True)" to "X, y = fetch_openml('mnist_784', version=1, return_X_y=True, as_frame=False)". To share it with others, I submitted this fix as part of a "Question" (max 255 characters) on Jan 9, 2022. The reason the fix is needed is that the fetch_openml() API of Scikit-learn changed in Dec 2020 (as one can determine from its online documentation). I haven't tried running all of the code examples, but most of the ones I tried were still working in January 2022. I should mention that I found the Questions and Answers in the guidebook for each lecture were often helpful for clarifying some points, e.g., 'vanishing gradients' doesn't mean the weights become zero but that they aren't changing fast enough (Lecture 15). I appreciated that the prof was careful to say which years various advances were made. Note that a related series which addresses some of the same classification problems using more traditional methods is Williams' 'Learning Statistics (2017)' (which I found to be a much easier series than this one, though more of it was review for me).
My path through this Machine Learning series: I started watching it Nov 20, 2021. I raced through the first half; the only lecture I watched twice was L11 on clustering algorithms. Then I watched L14 multiple times in order to understand the convolutional neural networks for computer vision as much as I could. From L16 to the end I watched each lecture at least twice. Then I went back and watched L1 to L15 more carefully to pick up any details I had missed the first time (and they had some forward references that made more sense on the 2nd pass). I called it a wrap Feb 11, 2022. Overall, this series was a boon for me. It's been more than 10 years since I've attended a computer science conference. I feel like this series has brought me up to date.
Date published: 2022-03-06
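[Editor's note] The KeyError the reviewer describes can be reproduced without downloading MNIST. Since scikit-learn 0.24 (December 2020), fetch_openml returns a pandas DataFrame by default, and indexing a DataFrame with a permutation array looks up column labels rather than rows. A minimal sketch of the failure mode, using a tiny made-up DataFrame standing in for the MNIST data:

```python
import numpy as np
import pandas as pd

# A DataFrame indexed with a permutation array looks up *column* labels,
# which raises a KeyError; a NumPy array reorders rows as intended.
X_frame = pd.DataFrame({"pixel0": [0, 1, 2], "pixel1": [3, 4, 5]})
X_array = X_frame.to_numpy()

permutation = np.array([2, 0, 1])

try:
    X_frame[permutation]          # fails: labels 2, 0, 1 are not columns
except KeyError:
    print("DataFrame: KeyError")

shuffled = X_array[permutation]   # works: rows reordered positionally
print(shuffled[0])                # row 2 of the original array
```

Passing as_frame=False to fetch_openml restores the NumPy-array return type the lecture code expects, which is why the reviewer's fix works.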
Rated 5 out of 5 by from Excellent information Still working on it, but I appreciate the manner and presentation of the information.
Date published: 2022-02-21
Rated 5 out of 5 by from Excellent content, scripting and presentation! I have a number of DVD sets from The Great Courses and subscriptions on UDemy. Having DVDs means I have the content somewhat permanently. This is one of the best overview courses I have watched! The content is excellent and the organization is at exactly the right level for covering critical concepts. I bought this for review, but there is always something you learn from a new course. Being able to watch on Roku is a great feature. Why Apple TV isn't supported is beyond me. A little hard to follow the code examples on the TV, but being able to go back and work with the reference material on the computer makes up for it.
Date published: 2022-01-30
Rated 5 out of 5 by from More learning Received my Machine Learning package a bit ago. First thing, I put CD #1 in my computer. Did not work. Tried again. Same response. Then I put it in my CD player (yes, I still use it) and it worked. Hoped it would work in the computer. Maybe you can tell me why it does not. School started for me with Python on the list.
Date published: 2022-01-02
Rated 1 out of 5 by from Course not yet received Course was ordered two weeks ago and has not been delivered as of 29 Dec
Date published: 2021-12-30
Rated 4 out of 5 by from Knowledgeable Instructor - Basic Info Challenge I was hoping for a more basic approach by the Instructor, who I thought was outstanding. However, he assumes the viewer already has a basic knowledge of programming, Python, etc., which is needed to understand his presentations.
Date published: 2021-11-17
Rated 5 out of 5 by from Learn how to talk to Tech about what you need I thought this course would be for Python coders who want to learn Machine Learning. It is so much more! Each chapter of 20 in this introduction explains real world scenarios on how to apply Machine Learning to real world questions. There is a whole chapter applied to each category such as visual data, audio data, language data, determine the best web site presentation, as well as filter out the 100 best resumes out of 10,000 choices. Machine Learning can match people skills to tasks! Understanding Machine Learning is key to communicating with your technology department about your information need. Machine Learning is so much more than target marketing. Enjoy these engaging lectures. Purchase it for your project managers and employees who work with data. Must have video to see the graphs.
Date published: 2021-07-06

Overview

Taught by Professor Michael L. Littman of Brown University, this course teaches you about machine-learning programs and how to write them in the Python programming language. For those new to Python, a "get-started" tutorial is included. The professor covers major concepts and techniques, all illustrated with real-world examples such as medical diagnosis, game-playing, spam filters, and media special effects.

About

Michael L. Littman

Join me to understand the mind-bending and truly powerful ways that machine learning is shaping our world and our future.

INSTITUTION

Brown University

Michael L. Littman is the Royce Family Professor of Teaching Excellence in Computer Science at Brown University. He earned his bachelor’s and master’s degrees in Computer Science from Yale University and his PhD in Computer Science from Brown University.

 

Professor Littman’s teaching has received numerous awards, including the Robert B. Cox Award from Duke University, the Warren I. Susman Award for Excellence in Teaching from Rutgers University, and both the Philip J. Bray Award for Excellence in Teaching in the Physical Sciences and the Distinguished Research Achievement Award from Brown University. His research papers have been honored for their lasting impact, earning him the Association for the Advancement of Artificial Intelligence (AAAI) Classic Paper Award at the Twelfth National Conference on Artificial Intelligence and the International Foundation for Autonomous Agents and Multiagent Systems Influential Paper Award at the Eleventh International Conference on Machine Learning.

 

Professor Littman is the codirector of the Humanity Centered Robotics Initiative at Brown University. He served as program cochair for the 26th International Conference on Machine Learning, the 27th AAAI Conference on Artificial Intelligence, and the 4th Multidisciplinary Conference on Reinforcement Learning and Decision Making. He is a fellow of the AAAI, the Association for Computing Machinery, and the Leshner Leadership Institute for Public Engagement with Science.

 

Professor Littman gave two TEDx talks on artificial intelligence, and he appeared in the documentary We Need to Talk about A.I. He also hosts a popular YouTube channel with computer science research videos and educational music videos.

By This Professor

Introduction to Machine Learning


01: Telling the Computer What We Want

Professor Littman gives a bird’s-eye view of machine learning, covering its history, key concepts, terms, and techniques as a preview for the rest of the course. Look at a simple example involving medical diagnosis. Then focus on a machine-learning program for a video green screen, used widely in television and film. Contrast this with a traditional program to solve the same problem.

31 min
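The traditional, hand-written side of that green-screen contrast can be sketched in a few lines of Python: mark a pixel as background when its green channel dominates. The thresholds below are invented for illustration and are not the lecture's actual rule:

```python
import numpy as np

# A hand-coded chroma-key rule: a pixel is "green screen" when its green
# channel is bright and clearly dominates red and blue. (Illustrative
# thresholds only.)
def green_screen_mask(image):
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    return (g > 100) & (g > 1.5 * r) & (g > 1.5 * b)

# One green pixel and one skin-tone pixel.
frame = np.array([[[20, 200, 30], [180, 170, 160]]])
print(green_screen_mask(frame))  # only the green pixel is masked
```

A machine-learning approach would instead learn such a rule from labeled example pixels, which is the contrast the lecture draws.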

02: Starting with Python Notebooks and Colab

The demonstrations in this course use the Python programming language, the most popular and widely supported language in machine learning. Dr. Littman shows you how to run programming examples from your web browser, which avoids the need to install the software on your own computer, saving installation headaches and giving you more processing power than is available on a typical home computer.

17 min

03: Decision Trees for Logical Rules

Can machine learning beat a rhyming rule, taught in elementary school, for determining whether a word is spelled with an I-E or an E-I—as in “diet” and “weigh”? Discover that a decision tree is a convenient tool for approaching this problem. After experimenting, use Python to build a decision tree for predicting the likelihood for an individual to develop diabetes based on eight health factors.

31 min
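A decision tree of the kind described can be sketched with scikit-learn; the rows and labels below are made up, standing in as a two-feature miniature of the lecture's eight-factor diabetes dataset:

```python
from sklearn.tree import DecisionTreeClassifier

# Toy rows: [glucose, BMI] with invented labels (1 = diabetic).
X = [[85, 22.0], [168, 30.5], [140, 35.1], [95, 24.3],
     [183, 29.2], [110, 26.0], [155, 33.7], [78, 21.4]]
y = [0, 1, 1, 0, 1, 0, 1, 0]

# A shallow tree learns readable threshold rules from the data.
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(X, y)

print(tree.predict([[90, 23.0], [170, 32.0]]))  # low-risk vs high-risk
```

The fitted tree amounts to a small set of if-then rules on glucose and BMI, which is what makes decision trees easy to inspect.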

04: Neural Networks for Perceptual Rules

Graduate to a more difficult class of problems: learning from images and auditory information. Here, it makes sense to address the task more or less the way the brain does, using a form of computation called a neural network. Explore the general characteristics of this powerful tool. Among the examples, compare decision-tree and neural-network approaches to recognizing handwritten digits.

30 min
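The decision-tree versus neural-network comparison can be sketched on scikit-learn's small built-in 8x8 digits dataset (a stand-in for the MNIST digits used in the course); exact scores will vary with versions and seeds:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# 8x8 handwritten digits, split into train and test sets.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                    random_state=0).fit(X_train, y_train)

# The neural network typically recognizes digits notably better.
print(f"tree: {tree.score(X_test, y_test):.2f}")
print(f"net:  {net.score(X_test, y_test):.2f}")
```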

05: Opening the Black Box of a Neural Network

Take a deeper dive into neural networks by working through a simple algorithm implemented in Python. Return to the green screen problem from the first lecture to build a learning algorithm that places the professor against a new backdrop.

29 min

06: Bayesian Models for Probability Prediction

A program need not understand the content of an email to know with high probability that it’s spam. Discover how machine learning does so with the Naïve Bayes approach, which is a simplified application of Bayes’ theorem to a simplified model of language generation. The technique illustrates a very useful strategy: going backwards from effects (in this case, words) to their causes (spam).

29 min
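The Naive Bayes idea can be sketched with scikit-learn on a made-up four-email corpus (invented text, not the lecture's data):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny invented corpus: 1 = spam, 0 = not spam.
emails = [
    "win money now claim your free prize",
    "free prize waiting click to claim",
    "meeting moved to tuesday afternoon",
    "lunch tomorrow to discuss the project",
]
labels = [1, 1, 0, 0]

# Turn each email into word counts, then model word frequencies per class.
vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(emails)

model = MultinomialNB()
model.fit(counts, labels)

# Reasoning backwards from words (effects) to spam/not-spam (cause).
test = vectorizer.transform(["claim your free money"])
print(model.predict(test))  # classified as spam
```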

07: Genetic Algorithms for Evolved Rules

When you encounter a new type of problem and don’t yet know the best machine learning strategy to solve it, a ready first approach is a genetic algorithm. These programs apply the principles of evolution to artificial intelligence, employing natural selection over many generations to optimize your results. Analyze several examples, including finding where to aim.

28 min
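A genetic algorithm of the kind described can be sketched in plain Python on the classic "OneMax" toy problem (maximize the number of 1s in a bit string); this illustrates the idea and is not code from the course:

```python
import random

random.seed(0)
LENGTH, POP, GENERATIONS = 20, 30, 60

def fitness(bits):
    return sum(bits)  # count of 1s; the maximum is LENGTH

# Start from a random population of bit strings.
population = [[random.randint(0, 1) for _ in range(LENGTH)]
              for _ in range(POP)]

for _ in range(GENERATIONS):
    # Selection: keep the fitter half as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, LENGTH)   # one-point crossover
        child = a[:cut] + b[cut:]
        i = random.randrange(LENGTH)        # point mutation
        child[i] ^= 1
        children.append(child)
    population = parents + children

best = max(population, key=fitness)
print(fitness(best))  # evolution drives this toward 20
```

Selection, crossover, and mutation are the same three operators a genetic algorithm applies to any encoded candidate solution, such as where to aim.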

08: Nearest Neighbors for Using Similarity

Simple to use and speedy to execute, the nearest neighbor algorithm works on the principle that adjacent elements in a dataset are likely to share similar characteristics. Try out this strategy for determining a comfortable combination of temperature and humidity in a house. Then dive into the problem of malware detection, seeing how the nearest neighbor rule can sort good software from bad.

29 min
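The temperature-and-humidity example can be sketched with scikit-learn's nearest-neighbor classifier; the readings and comfort labels below are invented:

```python
from sklearn.neighbors import KNeighborsClassifier

# Made-up (temperature in C, relative humidity %) readings,
# labeled 1 = comfortable, 0 = uncomfortable.
X = [[21, 45], [22, 50], [23, 40], [30, 80],
     [31, 75], [10, 30], [12, 85], [29, 90]]
y = [1, 1, 1, 0, 0, 0, 0, 0]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X, y)

# A new reading gets the majority label of its 3 closest neighbors.
print(model.predict([[22, 47], [32, 85]]))
```

The same majority-vote-of-neighbors rule, applied to feature vectors extracted from software, is what drives the malware-detection example.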

09: The Fundamental Pitfall of Overfitting

Having covered the five fundamental classes of machine learning in the previous lessons, now focus on a risk common to all: overfitting. This is the tendency to model training data too well, which can harm the performance on the test data. Practice avoiding this problem using the diabetes dataset from lecture 3. Hear tips on telling the difference between real signals and spurious associations.

28 min
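The overfitting effect can be sketched on synthetic data with deliberately noisy labels: an unrestricted decision tree fits the training set perfectly yet scores worse on held-out data, because it has memorized noise. The dataset and parameters here are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data with 20% of labels flipped, so a perfect fit to the
# training set necessarily memorizes noise.
X, y = make_classification(n_samples=400, n_features=8, flip_y=0.2,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

deep = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
shallow = DecisionTreeClassifier(max_depth=3,
                                 random_state=0).fit(X_train, y_train)

# The unrestricted tree shows a large train/test gap.
print(f"deep:    train {deep.score(X_train, y_train):.2f}  "
      f"test {deep.score(X_test, y_test):.2f}")
print(f"shallow: train {shallow.score(X_train, y_train):.2f}  "
      f"test {shallow.score(X_test, y_test):.2f}")
```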

10: Pitfalls in Applying Machine Learning

Explore pitfalls that loom when applying machine learning algorithms to real-life problems. For example, see how survival statistics from a boating disaster can easily lead to false conclusions. Also, look at cases from medical care and law enforcement that reveal hidden biases in the way data is interpreted. Since an algorithm is doing the interpreting, understanding what is happening can be a challenge.

28 min

11: Clustering and Semi-Supervised Learning

See how a combination of labeled and unlabeled examples can be exploited in machine learning, specifically by using clustering to learn about the data before making use of the labeled examples.

27 min
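The cluster-then-label idea can be sketched with scikit-learn's k-means on two synthetic blobs, with one made-up labeled example per cluster; the cluster names are invented:

```python
import numpy as np
from sklearn.cluster import KMeans

# Two well-separated synthetic blobs of unlabeled points.
rng = np.random.default_rng(0)
blob_a = rng.normal(loc=[0, 0], scale=0.5, size=(50, 2))
blob_b = rng.normal(loc=[5, 5], scale=0.5, size=(50, 2))
X = np.vstack([blob_a, blob_b])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Semi-supervised step: a single labeled point per cluster names the
# cluster, and every unlabeled point inherits that name.
label_of_cluster = {
    kmeans.predict([[0, 0]])[0]: "class A",
    kmeans.predict([[5, 5]])[0]: "class B",
}
print(label_of_cluster[kmeans.predict([[4.8, 5.2]])[0]])
```

Two labels thus propagate to a hundred points, which is the leverage semi-supervised learning aims for.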

12: Recommendations with Three Types of Learning

Recommender systems are ubiquitous, from book and movie tips to work aids for professionals. But how do they function? Look at three different approaches to this problem, focusing on Professor Littman’s dilemma as an expert reviewer for conference paper submissions, numbering in the thousands. Also, probe Netflix’s celebrated one-million-dollar prize for an improved recommender algorithm.

30 min

13: Games with Reinforcement Learning

In 1959, computer pioneer Arthur Samuel popularized the term “machine learning” for his checkers-playing program. Delve into strategies for the board game Othello as you investigate today’s sophisticated algorithms for improving play—at least for the machine. Also explore game-playing tactics for chess, Jeopardy!, poker, and Go, which have been a hotbed for machine-learning research.

30 min
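The core learning loop behind such game-playing programs can be sketched with tabular Q-learning on a made-up five-state corridor (far simpler than the games discussed, but the same value-update idea):

```python
import random

# Tabular Q-learning: states 0..4 in a corridor, reward only for
# reaching state 4. Action 0 moves left, action 1 moves right.
random.seed(0)
N_STATES, ACTIONS = 5, [0, 1]
alpha, gamma, epsilon = 0.5, 0.9, 0.2
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for _ in range(500):
    state = 0
    while state != N_STATES - 1:
        if random.random() < epsilon:
            action = random.choice(ACTIONS)              # explore
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1  # exploit
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Bellman update: nudge Q toward reward plus discounted future value.
        Q[state][action] += alpha * (
            reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

# The learned greedy policy moves right in every state.
print([0 if q[0] > q[1] else 1 for q in Q[:-1]])
```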

14: Deep Learning for Computer Vision

Discover how the ImageNet challenge helped revive the field of neural networks through a technique called deep learning, which is ideal for tasks such as computer vision. Consider the problem of image recognition and the steps deep learning takes to solve it. Dr. Littman throws out his own challenge: Train a computer to distinguish foot files from cheese graters.

27 min
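What a single convolutional filter in such a network computes can be sketched in NumPy: slide a small kernel across an image and take dot products. The tiny image and kernel below are made up:

```python
import numpy as np

# A 4x4 "image" that is dark on the left and bright on the right.
image = np.array([
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
], dtype=float)

# A vertical-edge kernel: responds to left-to-right brightness increases.
kernel = np.array([[-1.0, 1.0]])

# Slide the kernel over each row and take dot products (a convolution).
out = np.zeros((4, 3))
for i in range(4):
    for j in range(3):
        out[i, j] = np.sum(image[i:i + 1, j:j + 2] * kernel)

print(out)  # responds only at the dark-to-bright boundary (middle column)
```

A convolutional network stacks many such learned filters, so features like edges are detected wherever they appear in the image.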

15: Getting a Deep Learner Back on Track

Roll up your sleeves and debug a deep-learning program. The software is a neural net classifier designed to separate pictures of animals and bugs. In this case, fix the bugs in the code to find the bugs in the images! Professor Littman walks you through diagnostic steps relating to the representational space, the loss function, and the optimizer. It’s an amazing feeling when you finally get the program working well.

30 min

16: Text Categorization with Words as Vectors

Previously, you saw how machine learning is used in spam filtering. Dig deeper into problems of language processing, such as how a computer guesses the word you are typing, even when you misspell it badly. Focus on the concept of word embeddings, which “define” the meanings of words using vectors in high-dimensional space—a method that involves techniques from linear algebra.

30 min
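The geometry behind word embeddings can be sketched with cosine similarity; the three-dimensional vectors below are invented for illustration (real GloVe embeddings have 50 to 300 dimensions):

```python
import numpy as np

# Toy "embeddings": related words point in similar directions.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.7, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine(u, v):
    # Cosine of the angle between two vectors: 1 = same direction.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

print(f"king~queen: {cosine(vectors['king'], vectors['queen']):.2f}")
print(f"king~apple: {cosine(vectors['king'], vectors['apple']):.2f}")
```

A text classifier can average the embeddings of a document's words into one vector, which is roughly how the lecture's classifier uses GloVe to boost its test scores.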

17: Deep Networks That Output Language

Continue your study of machine learning and language by seeing how computers not only read text, but how they can also generate it. Explore the current state of machine translation, which rivals the skill of human translators. Also, learn how algorithms handle a game that Professor Littman played with his family, where a given phrase is expanded piecemeal to create a story. The results can be quite poetic!

29 min

18: Making Stylistic Images with Deep Networks

One way to think about the creative process is as a two-stage operation, involving an idea generator and a discriminator. Study two approaches to image generation using machine learning. In the first, a target image of a pig serves as the discriminator. In the second, the discriminator is programmed to recognize the general characteristics of a pig, which is more how people recognize objects.

29 min

19: Making Photorealistic Images with GANs

A new approach to image generation and discrimination pits both processes against each other in a “generative adversarial network,” or GAN. The technique can produce a new image based on a reference class, for example making a person look older or younger, or automatically filling in a landscape after a building has been removed. GANs have great potential for creativity and, unfortunately, fraud.

30 min

20: Deep Learning for Speech Recognition

Consider the problem of speech recognition and the quest, starting in the 1950s, to program computers for this task. Then delve into the machine-learning algorithms behind today’s sophisticated speech recognition systems. Get a taste of the technology by training with deep-learning software for recognizing simple words. Finally, look ahead to the prospect of conversing computers.

30 min

21: Inverse Reinforcement Learning from People

Are you no good at programming? Machine learning can learn from a demonstration, predict what you want, and suggest improvements. For example, inverse reinforcement learning turns the tables on the following logical relation: “if you are a horse and like carrots, go to the carrot.” Inverse reinforcement learning looks at it like this: “if you see a horse go to the carrot, it might be because the horse likes carrots.”

29 min

22: Causal Inference Comes to Machine Learning

Get acquainted with a powerful new tool in machine learning, causal inference, which addresses a key limitation of classical methods—the focus on correlation to the exclusion of causation. Practice with a historic problem of causation: the link between cigarette smoking and cancer, which will always be obscured by confounding factors. Also look at other cases of correlation versus causation.

30 min

23: The Unexpected Power of Over-Parameterization

Probe the deep-learning revolution that took place around 2015, conquering worries about overfitting data due to the use of too many parameters. Dr. Littman sets the stage by taking you back to his undergraduate psychology class, taught by one of The Great Courses’ original professors. Chart the breakthrough that paved the way for deep networks that can tackle hard, real-world learning problems.

30 min

24: Protecting Privacy within Machine Learning

Machine learning is both a cause and a cure for privacy concerns. Hear about two notorious cases where de-identified data was unmasked. Then, step into the role of a computer security analyst, evaluating different threats, including pattern recognition and compromised medical records. Discover how to think like a digital snoop and evaluate different strategies for thwarting an attack.

31 min

25: Mastering the Machine Learning Process

Finish the course with a lightning tour of meta-learning—algorithms that learn how to learn, making it possible to solve problems that are otherwise unmanageable. Examine two approaches: one that reasons about discrete problems using satisfiability solvers and another that allows programmers to optimize continuous models. Close with a glimpse of the future for this astounding field.

34 min