Introduction to Machine Learning

Search engines. Navigation systems. Game-playing robots. Learn how smart machines got that way in this course taught by a pioneer researcher in machine learning.
Introduction to Machine Learning is rated 4.0 out of 5 by 32 reviewers.
Rated 5 out of 5, "Great course for the right student": I am a quarter of the way through this course. I have a college degree in maths and stats from 20 years ago. To really get the most out of it there are some prerequisites; I would say good college maths (matrix algebra, multivariable optimisation, etc.) and some basic programming. If you just want to know how to apply ML in a black-box way, you can still do that from this course, but to fully engage with and master it I'd suggest supplementing this course with a good ML textbook, as indicated in the suggested readings at the end of each chapter of the provided guidebook.
Date published: 2024-01-22
Rated 5 out of 5, "AI Machine Learning Under the Hood": NOTES: My goal for viewing the offering was a detailed update to the AI algorithms landscape, especially between 1980 (my graduate academic timeframe) and now, and this goal was masterfully achieved by Littman for work up to 2020. The annotated bibliography in the guidebook is spot on for historic research and innovation background, as is the presentation itself. So despite serious caveats, however dated, my score is to strongly recommend (with strong issues and caveats).

Prof. Littman gives a great overview of what happens in the "code behind" or "back end" under the hood of AI projects with machine learning. LLMs (large language models) like Microsoft ChatBots and Google Bard, and MLLMs (multi-modal large language models) like Apple "Visual Look Up" (the (i) with sparkle that recognizes plants, landmarks, artworks), use the technology presented here, with an impressive variety of "front ends". If you care about the LLM "front end", a very recent book by Microsoft (in edited collaboration with Google) would be "Modern Generative AI with ChatGPT and OpenAI" by Alto. Currently, in 2023, this "front end" application domain software is incredibly formative and would easily be a separate TGC/Wondrium offering.

The current AI texts recommended in the guidebook are middleweight, and occasionally misguided, as Littman actually points out. However, "The Master Algorithm" and the Russell and Norvig "Artificial Intelligence" are actually the ones used in current AI academic coursework, and their agenda of AI subject topics is mimicked by this presentation. I followed the presentations with "The Master Algorithm", because it was an enthusiastic consolidation attempt and seemed the lesser of overview evils. The point is that the AI subject topics in this offering are those currently presented academically, for all AI.

The background needed to understand this material includes computer programming, matrix and vector algebra (linear algebra or advanced engineering math), statistics including correlation, and geometric analysis or calculus. This is graduate or advanced undergraduate Bachelor of Science first overview material. Be prepared for a challenge, all ye who enter.

The browser-based Python code notebooks were runnable and editable in MSFT Edge or Google Chrome. Great way to teach! However, STRONG CAVEAT, there are errors in the Colab Python due to Google access and version updates to Python libraries. Around half of the chapter Python examples have issues. So expect to click the "Stack Overflow" link to find out how to fix the code issues, and debug. (Good luck to newbies and dilettantes.) Some problems were not easily fixable, like the Chapter 22 IBM libraries.

The topics include:
  • Chapters 1 and 2: AI overview, Python notebooks, and Colab use.
  • Chapters 3 to 8: the five "schools of machine learning", including decision trees, neural networks and deep learning, Bayesian networks, genetic algorithms, and nearest neighbors.
  • Chapters 9 and 10: machine learning pitfalls.
  • Chapter 11: semi-supervised clustering similarity solution AI space.
  • Chapter 12: recommendations implementation (web reviews, reviewer assignment, ChatBots) in three forms (unsupervised, passive supervised, active supervised). Current ChatBots may be a variant of the active supervised, where the active component is an actual user participant.
  • Chapter 13: AI reinforcement learning, gaming strategy.
  • Chapters 14, 18, 19: neural network deep learning "computer vision" photo recognition and matching. Think Apple iOS 15 "Visual Look Up" (the iPad and iPhone photo (i) with sparkle that recognizes plants, landmarks), aka MLLM (multimodal large language models). Also deep fakes, animation, generative visual.
  • Chapters 15, 16, 17, 20: the text and speech LLM (large language model) recognition and composition AI text techniques used for ChatBots like ChatGPT, text correction, etc.
  • Chapter 21: learning from people, but it agonizingly avoids search sessions and dwells on robotics and devices. The AI "Terminator" issue is discussed.
  • Chapter 22: "Bayesian causal inference" machine learning, which can produce directed graphs that show causal effects.
  • Chapter 23: "overparameterization" benefits, when data is massive.
  • Chapter 24: security issues.
  • Chapter 25: the best summary of the current and future machine learning zeitgeist I've seen (more below).

My grandsons have used the Bing ChatBot to produce heartwarming poems about each other, their parents, and us grandparents. WOW! They even did both haiku (their update) and "dactylic hexameter" (like Longfellow's Evangeline, at my comedic suggestion) updates. Suddenly, Grandpa knows how to play, and it's something other than Fortnite! The process of fun becomes a computation request for an intellectual project (like a poem or a story) with ongoing feedback expansion and correction; an escalade, a spiral upward. True search, true education, improved composition. Lesson 25 of this offering concludes: "...creating a machine learning program will look less like training a machine and more like teaching a living being. And humans will indeed be the masters. Whether we'll be a force for good is up to us." And up to our grandchildren.

ISSUES AND CAVEATS
  • My opinion is that AI and machine learning teaching should also include a history and comparison of the tools landscape, given the assurance (as Chapter 25 implies) that it will change for the better. IMHO Python is OK but may be an interloper confined to prototypes and academia. The initial overhypes and misuse of LISP and Scheme for large projects led to "AI Winters", and discussions of enterprise tools really should not be avoided.
  • More material on ethics and regulation is necessary. Littman is clear that AI machine learning is essential to the attempted larceny and total waste of the vast majority of email. Attempted fraud is punished unless it's email, reviews, web product offerings; and this fraud is a tremendous waste of time and resources. My suspicion is that the next "AI Winter" will be due to the lack of (or worse, really bad) regulation.
  • What a multitude of AI developers will be paid for is merging "back end" AI ideas into generic medical, legal, financial, "Office" and Web CoPilot and Assistant apps resembling the various new search ChatBots. The "front end" topic is major. This offering is dated before any of the guidance (especially by Microsoft) on front end development patterns for ChatBots and other "Assistants".
  • The "limits of computation" for AI, and Godel, Turing, Smale's 18th problem, and the deep meaning of "intelligence", are not really addressed. No paradoxes, oxymorons, unclear solutions; no Godel or Turing (on computation limits), no "Godel, Escher, Bach". The problem here is that claims have been made about "sentient automata" and "Master Algorithms", now or in the future, which great thinking by giants of "intelligence" has shown won't be possible. I refer to the PNAS paper "The difficulty of computing stable and accurate neural networks: On the barriers of deep learning and Smale's 18th problem" by Colbrook et al., and to the "Pitt" edu teaching site for "Paradoxes of Impossibility: Computation" by Norton. Perhaps it's better addressed in a different offering by TGC or Wondrium, but nipping that science fiction hype in the bud is best done early and often, to deter the misunderstandings that cause "AI Winters". A great example of such science fiction is 1970's "Player Piano" by Kurt Vonnegut, a convincing story about the 95% worldwide unemployment that would occur by 1990, all due to computer technology. A more recent example is "AI 2041" by Kai-Fu Lee, who leads AI efforts by Google. Go figure.

SUMMARY: AI has gone acceptably mainstream. I'll almost certainly be using this in Visual Studio Enterprise and Excel VSTO medical projects, and other fun user stuff. Thanks a bunch, Prof. Littman and TGC/Wondrium.
Date published: 2023-09-02
Rated 1 out of 5, "Notebook problems": A well-thought-out course. BUT! Some of the Jupyter notebooks do not run.
Date published: 2023-08-09
Rated 5 out of 5, "Seems excellent!": I am still at the beginning, but it seems that this is exactly what I need. Also, the presenter is active and engaging!
Date published: 2022-06-29
Rated 5 out of 5, "I watched all 25 lectures": Review of Prof. Littman's 'Introduction to Machine Learning (2020)' (4 DVDs). (Review submitted March 6, 2022.)

For me, this series was filled with insights. In Lecture 14, I learned the story of how researchers surprisingly won the 2012 ImageNet challenge by 10 percentage points. It was the first time the 'neural network' technique had lived up to its hype of decades earlier. This event ignited a 'deep learning' revolution across computer science (Lectures 16-20) and the researchers received the 2018 Turing Award. When I heard about neural networks in the early 1990s, the sigmoid activation function was typically used. In this series, I learned that a much simpler 'ReLU' activation function is now used so GPUs can dramatically improve the training performance (e.g., to weeks instead of years). The prof showed how a neural network calculation can be written as a simple single-threaded program (Lecture 4). I'd heard of the latent semantic analysis approach (of converting text to numeric vectors) years ago, a linear technique. In this series, I learned about a successor, global vectors ("GloVe" of 2014), which can be downloaded (for English at least), and the prof used them to boost the test scores of a simple text classifier he wrote (Lecture 16). Years ago I heard an expert in text classification say that 'support vector machines' were his favored technique, but I didn't understand then what they were. I was delighted that the prof spent several minutes on this topic (Lecture 9) and provided intuitive visual examples (basically SVMs find the best linear plane separating the 2 types of points). The lectures on Disc 4 were mostly tougher ones, but don't skip Lecture 23, which is an easier one on the recent observation (2018) that 'double descent' is common in many scenarios (more parameters help, then even more cause overfitting, but then even more can improve results again). Often 95% of the network can be pruned away at the end, but it seems the extra random weights are needed at the beginning for learning to be likely to find a good solution, which researchers are trying to better understand.

Watching this series, I often got lost the first time watching a lecture (e.g., Lecture 17 on GPT-2). I've had the same problem watching several excellent series before, e.g., Hazen's 'Joy of Science (2001)' and Norden's 'Understanding the Brain (2007)'. I try to understand every detail of a series. My trick is, after watching a lecture, to start watching it again. During this second pass, I have the advantage of knowing what's coming next, often examples or details that make the earlier parts easier to understand. If I still don't understand something, I replay that part repeatedly until I get it (or have to give up). In past series, I've felt I've usually understood at least 95% of the material (to the extent presented) before moving on to the next lecture. In this series, I sometimes moved on at maybe just 80%, often because I didn't try to understand every line of Python code, which I think would have sometimes required consulting online documentation for machine learning libraries (e.g., Scikit-learn and Keras). I often don't watch a lecture in one sitting. I typically watch these series while having dinner. A lecture might get spread over several days if I have to rewatch parts, which I'm willing to do. But typically I don't have time available for 'homework'. I just get what I can out of the lectures. (I did make some exceptions in the case of this series however, as noted below.)

Only after watching all the lectures did I try logging into the "Google Colab" via the URLs provided. I hadn't heard of this environment before. It was good to be made aware of what I guess computer science students are often using these days. (30 years since I was one of those.) I had no previous Python experience, but I ran the "starter example" of L2 successfully (on Jan 3, 2022), including with some of my own edits to the Python code. Then I randomly picked L17 and tried running the first program, not realizing for a while that Colab was downloading several gigabytes from GitHub (that I presume the later L17 programs need); I stopped it after a few minutes. I was curious how long some of the L4 programs would take to run, but its 2nd program quickly failed with an error (KeyError) on the "X = X[permutation]" line. It was working when the prof ran it for his lecture, so I figured something had changed in the environment, and likely there was a simple fix. After maybe an hour I found the fix, which was to go back to the 1st program where X is set and change its last line from

    X, y = fetch_openml('mnist_784', version=1, return_X_y=True)

to

    X, y = fetch_openml('mnist_784', version=1, return_X_y=True, as_frame=False)

To share it with others, I submitted this fix as part of a "Question" (max 255 characters) on Jan 9, 2022. The reason the fix is needed is that the fetch_openml() api of Scikit-learn changed in Dec 2020 (as one can determine from its online documentation). I haven't tried running all of the code examples, but most of the ones I tried were still working in January 2022.

I should mention that I found the Questions and Answers in the guidebook for each lecture were often helpful for clarifying some points, e.g., 'vanishing gradients' doesn't mean the weights become zero but that they aren't changing fast enough (Lecture 15). I appreciated that the prof was careful to say which years various advances were made. Note that a related series which addresses some of the same classification problems using more traditional methods is Williams' 'Learning Statistics (2017)' (which I found to be a much easier series than this one, though more of it was review for me).

My path through this Machine Learning series: I started watching it Nov 20, 2021. I raced through the first half; the only lecture I watched twice was L11 on clustering algorithms. Then I watched L14 multiple times in order to understand the convolutional neural networks for computer vision as much as I could. From L16 to the end I watched each lecture at least twice. Then I went back and watched L1 to L15 more carefully to pick up any details I had missed the first time (and they had some forward references that made more sense on the 2nd pass). I called it a wrap Feb 11, 2022. Overall, this series was a boon for me. It's been more than 10 years since I've attended a computer science conference. I feel like this series has brought me up to date.
Date published: 2022-03-06
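For anyone who hits the same KeyError in the Lecture 4 notebook, the fix described in the review above boils down to one extra argument when fetching MNIST. A minimal sketch of the change (newer scikit-learn returns a pandas DataFrame by default, which breaks the notebook's integer indexing):

    from sklearn.datasets import fetch_openml

    # Original notebook line (per the review above); with newer scikit-learn it
    # returns a pandas DataFrame, so later code like X = X[permutation] raises
    # a KeyError:
    # X, y = fetch_openml('mnist_784', version=1, return_X_y=True)

    # The fix from the review: ask for plain NumPy arrays instead.
    X, y = fetch_openml('mnist_784', version=1, return_X_y=True, as_frame=False)
    print(X.shape, y.shape)  # expected: (70000, 784) (70000,)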
Rated 5 out of 5, "Excellent information": Still working on it, but I appreciate the manner and presentation of the information.
Date published: 2022-02-21
Rated 5 out of 5, "Excellent content, scripting and presentation!": I have a number of DVD sets from The Great Courses and subscriptions on Udemy. Having DVDs means I have the content somewhat permanently. This is one of the best overview courses I have watched! The content is excellent and the organization is at exactly the right level for covering critical concepts. I bought this for review, but there is always something you learn from a new course. Being able to watch on Roku is a great feature. Why Apple TV isn't supported is beyond me. It is a little hard to follow the code examples on the TV, but being able to go back and work with the reference material on the computer makes up for it.
Date published: 2022-01-30
Rated 5 out of 5, "More learning": Received my Machine Learning packet a bit ago. First thing, I put CD #1 in my computer. Did not work. Tried again. Same response. Then I put it in my CD player (yes, I still use it) and it worked. Hoped it would work in the computer. Maybe you can tell me why it does not. School started for me with Python on the list.
Date published: 2022-01-02

Overview

Taught by Professor Michael L. Littman of Brown University, this course teaches you about machine-learning programs and how to write them in the Python programming language. For those new to Python, a "get started" tutorial is included. The professor covers major concepts and techniques, all illustrated with real-world examples such as medical diagnosis, game-playing, spam filters, and media special effects.

About

Michael L. Littman

Join me to understand the mind-bending and truly powerful ways that machine learning is shaping our world and our future.

INSTITUTION

Brown University

Michael L. Littman is the Royce Family Professor of Teaching Excellence in Computer Science at Brown University. He earned his bachelor’s and master’s degrees in Computer Science from Yale University and his PhD in Computer Science from Brown University.

 

Professor Littman’s teaching has received numerous awards, including the Robert B. Cox Award from Duke University, the Warren I. Susman Award for Excellence in Teaching from Rutgers University, and both the Philip J. Bray Award for Excellence in Teaching in the Physical Sciences and the Distinguished Research Achievement Award from Brown University. His research papers have been honored for their lasting impact, earning him the Association for the Advancement of Artificial Intelligence (AAAI) Classic Paper Award at the Twelfth National Conference on Artificial Intelligence and the International Foundation for Autonomous Agents and Multiagent Systems Influential Paper Award at the Eleventh International Conference on Machine Learning.

 

Professor Littman is the codirector of the Humanity Centered Robotics Initiative at Brown University. He served as program cochair for the 26th International Conference on Machine Learning, the 27th AAAI Conference on Artificial Intelligence, and the 4th Multidisciplinary Conference on Reinforcement Learning and Decision Making. He is a fellow of the AAAI, the Association for Computing Machinery, and the Leshner Leadership Institute for Public Engagement with Science.

 

Professor Littman gave two TEDx talks on artificial intelligence, and he appeared in the documentary We Need to Talk about A.I. He also hosts a popular YouTube channel with computer science research videos and educational music videos.

By This Professor

Introduction to Machine Learning

01: Telling the Computer What We Want

Professor Littman gives a bird’s-eye view of machine learning, covering its history, key concepts, terms, and techniques as a preview for the rest of the course. Look at a simple example involving medical diagnosis. Then focus on a machine-learning program for a video green screen, used widely in television and film. Contrast this with a traditional program to solve the same problem.

31 min

02: Starting with Python Notebooks and Colab

The demonstrations in this course use the Python programming language, the most popular and widely supported language in machine learning. Dr. Littman shows you how to run programming examples from your web browser, which avoids the need to install the software on your own computer, saving installation headaches and giving you more processing power than is available on a typical home computer.

17 min

03: Decision Trees for Logical Rules

Can machine learning beat a rhyming rule, taught in elementary school, for determining whether a word is spelled with an I-E or an E-I—as in “diet” and “weigh”? Discover that a decision tree is a convenient tool for approaching this problem. After experimenting, use Python to build a decision tree for predicting the likelihood that an individual will develop diabetes based on eight health factors.

31 min
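As a rough sketch of the kind of exercise Lecture 3 describes (not the course's actual notebook), scikit-learn can fit a small decision tree to a table of eight health measurements; here the classic Pima diabetes table is assumed to be available on OpenML under the name "diabetes":

    from sklearn.datasets import fetch_openml
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Eight health measurements per person plus a tested_positive /
    # tested_negative label; the course supplies its own copy of similar data.
    data = fetch_openml("diabetes", version=1, as_frame=True)
    X, y = data.data, data.target

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Limit the depth so the learned rules stay readable.
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(X_train, y_train)

    print("held-out accuracy:", tree.score(X_test, y_test))
    print(export_text(tree, feature_names=list(X.columns)))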

04: Neural Networks for Perceptual Rules

Graduate to a more difficult class of problems: learning from images and auditory information. Here, it makes sense to address the task more or less the way the brain does, using a form of computation called a neural network. Explore the general characteristics of this powerful tool. Among the examples, compare decision-tree and neural-network approaches to recognizing handwritten digits.

30 min
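A minimal sketch of the decision-tree-versus-neural-network comparison mentioned in Lecture 4, using scikit-learn's built-in 8x8 handwritten-digit images rather than the course's own data:

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier

    # 8x8 grayscale digit images, flattened into 64 pixel features each.
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
    net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000,
                        random_state=0).fit(X_train, y_train)

    # On pixel data the neural network usually wins by a wide margin.
    print("decision tree accuracy:", tree.score(X_test, y_test))
    print("neural network accuracy:", net.score(X_test, y_test))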

05: Opening the Black Box of a Neural Network

Take a deeper dive into neural networks by working through a simple algorithm implemented in Python. Return to the green screen problem from the first lecture to build a learning algorithm that places the professor against a new backdrop.

29 min

06: Bayesian Models for Probability Prediction

A program need not understand the content of an email to know with high probability that it’s spam. Discover how machine learning does so with the Naïve Bayes approach, which is a simplified application of Bayes’ theorem to a simplified model of language generation. The technique illustrates a very useful strategy: going backwards from effects (in this case, words) to their causes (spam).

29 min
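To make the Naïve Bayes idea concrete, here is a toy spam filter in scikit-learn; the handful of training messages are invented, and the course's own notebook may differ:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Tiny invented training set: 1 = spam, 0 = not spam.
    messages = [
        "win a free prize now", "cheap meds click here",
        "meeting moved to 3pm", "lunch tomorrow?",
        "claim your free vacation", "project report attached",
    ]
    labels = [1, 1, 0, 0, 1, 0]

    # Bag-of-words counts feed a multinomial Naive Bayes model, which reasons
    # backwards from the observed words to the probability the cause was spam.
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(messages, labels)

    print(model.predict(["free prize meeting"]))
    print(model.predict_proba(["report for the project"]))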

07: Genetic Algorithms for Evolved Rules

When you encounter a new type of problem and don’t yet know the best machine learning strategy to solve it, a ready first approach is a genetic algorithm. These programs apply the principles of evolution to artificial intelligence, employing natural selection over many generations to optimize your results. Analyze several examples, including finding where to aim.

28 min
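A bare-bones genetic algorithm, independent of any specific course exercise: a population of bit strings evolves through selection, crossover, and mutation toward a toy fitness goal (maximizing the number of 1 bits):

    import random

    random.seed(0)
    LENGTH, POP_SIZE, GENERATIONS = 20, 30, 40

    def fitness(bits):
        # Toy objective: count the 1s. A real problem would score candidate rules.
        return sum(bits)

    def crossover(a, b):
        cut = random.randrange(1, LENGTH)
        return a[:cut] + b[cut:]

    def mutate(bits, rate=0.02):
        return [1 - b if random.random() < rate else b for b in bits]

    population = [[random.randint(0, 1) for _ in range(LENGTH)]
                  for _ in range(POP_SIZE)]
    for gen in range(GENERATIONS):
        # Selection: keep the fitter half as parents, breed the rest.
        population.sort(key=fitness, reverse=True)
        parents = population[: POP_SIZE // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children

    print("best fitness found:", fitness(max(population, key=fitness)))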

08: Nearest Neighbors for Using Similarity

Simple to use and speedy to execute, the nearest neighbor algorithm works on the principle that adjacent elements in a dataset are likely to share similar characteristics. Try out this strategy for determining a comfortable combination of temperature and humidity in a house. Then dive into the problem of malware detection, seeing how the nearest neighbor rule can sort good software from bad.

29 min
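The nearest-neighbor rule reduces to a few lines of scikit-learn; the temperature and humidity comfort data below are invented for illustration:

    from sklearn.neighbors import KNeighborsClassifier

    # Invented readings: (temperature in °C, relative humidity in %) -> comfortable?
    X = [[20, 40], [22, 45], [24, 50],   # comfortable
         [30, 80], [32, 75], [16, 90]]   # uncomfortable
    y = [1, 1, 1, 0, 0, 0]

    # Classify a new reading by the majority vote of its 3 closest neighbors.
    knn = KNeighborsClassifier(n_neighbors=3)
    knn.fit(X, y)

    print(knn.predict([[23, 55]]))   # likely comfortable
    print(knn.predict([[31, 85]]))   # likely not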

09: The Fundamental Pitfall of Overfitting

Having covered the five fundamental classes of machine learning in the previous lessons, now focus on a risk common to all: overfitting. This is the tendency to model training data too well, which can harm the performance on the test data. Practice avoiding this problem using the diabetes dataset from lecture 3. Hear tips on telling the difference between real signals and spurious associations.

28 min
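One way to see overfitting numerically (a sketch, not the course's diabetes notebook): let a decision tree grow without limits, compare its training accuracy with held-out accuracy, and then restrict its depth:

    from sklearn.datasets import load_breast_cancer   # stand-in tabular dataset
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    for depth in (None, 3):
        tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
        tree.fit(X_train, y_train)
        print(f"max_depth={depth}: train={tree.score(X_train, y_train):.3f}  "
              f"test={tree.score(X_test, y_test):.3f}")

    # The unrestricted tree scores 1.0 on the training data but often does no
    # better (or worse) on the test split than the depth-limited tree: much of
    # what it "learned" was noise in the training set.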

10: Pitfalls in Applying Machine Learning

Explore pitfalls that loom when applying machine learning algorithms to real-life problems. For example, see how survival statistics from a boating disaster can easily lead to false conclusions. Also, look at cases from medical care and law enforcement that reveal hidden biases in the way data is interpreted. Since an algorithm is doing the interpreting, understanding what is happening can be a challenge.

28 min

11: Clustering and Semi-Supervised Learning

See how a combination of labeled and unlabeled examples can be exploited in machine learning, specifically by using clustering to learn about the data before making use of the labeled examples.

27 min
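A small sketch of that semi-supervised idea on synthetic data: cluster all the points first, then let a handful of labeled examples name the clusters:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs

    # 300 points from 3 groups; pretend almost all of the labels are hidden.
    X, true_labels = make_blobs(n_samples=300, centers=3, random_state=0)
    known_idx = [np.where(true_labels == k)[0][0] for k in range(3)]  # 3 known labels

    # Step 1: cluster everything without using any labels.
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

    # Step 2: name each cluster using the few labeled points that fall inside it.
    cluster_to_label = {kmeans.labels_[i]: true_labels[i] for i in known_idx}
    predicted = np.array([cluster_to_label.get(c, -1) for c in kmeans.labels_])

    # On well-separated synthetic blobs this recovers the hidden labels almost
    # perfectly despite using only three labeled examples.
    print("agreement with the hidden labels:", (predicted == true_labels).mean())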

12: Recommendations with Three Types of Learning

Recommender systems are ubiquitous, from book and movie tips to work aids for professionals. But how do they function? Look at three different approaches to this problem, focusing on Professor Littman’s dilemma as an expert reviewer for conference paper submissions, numbering in the thousands. Also, probe Netflix’s celebrated one-million-dollar prize for an improved recommender algorithm.

30 min
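One standard recommendation technique, collaborative filtering, can be sketched on an invented ratings table: predict a missing rating from users whose past ratings look similar. This is a generic illustration, not the course's implementation:

    import numpy as np

    # Invented ratings matrix: rows are users, columns are movies, 0 = unrated.
    R = np.array([
        [5, 4, 0, 1],
        [4, 5, 1, 0],
        [1, 0, 5, 4],
        [0, 1, 4, 5],
    ], dtype=float)

    def cosine(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

    def predict(user, item):
        # Weight other users' ratings of this item by their similarity to `user`.
        weights, ratings = [], []
        for other in range(R.shape[0]):
            if other != user and R[other, item] > 0:
                weights.append(cosine(R[user], R[other]))
                ratings.append(R[other, item])
        return np.average(ratings, weights=weights)

    # The prediction is pulled low because user 0 most resembles user 1,
    # who rated this movie a 1.
    print(predict(user=0, item=2))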

13: Games with Reinforcement Learning

In 1959, computer pioneer Arthur Samuel popularized the term “machine learning” for his checkers-playing program. Delve into strategies for the board game Othello as you investigate today’s sophisticated algorithms for improving play—at least for the machine. Also explore game-playing tactics for chess, Jeopardy!, poker, and Go, which have been a hotbed for machine-learning research.

30 min
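Game-playing programs like these typically rest on some form of reinforcement learning; here is a generic tabular Q-learning sketch on a tiny made-up corridor game rather than Othello:

    import random

    random.seed(0)

    # Tiny "corridor" game: states 0..4, start at state 0, reward 1 for reaching
    # state 4. Actions: 0 = step left, 1 = step right. Q-learning should learn
    # that stepping right is always best.
    GOAL = 4
    Q = [[0.0, 0.0] for _ in range(GOAL + 1)]       # Q[state][action]
    alpha, gamma, epsilon = 0.5, 0.9, 0.1

    for episode in range(500):
        s = 0
        while s != GOAL:
            if random.random() < epsilon:
                a = random.randrange(2)              # explore
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1    # exploit current estimates
            s_next = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
            reward = 1.0 if s_next == GOAL else 0.0
            # Nudge Q(s, a) toward reward plus the discounted best future value.
            Q[s][a] += alpha * (reward + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next

    print("learned policy:",
          ["left" if Q[s][0] > Q[s][1] else "right" for s in range(GOAL)])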

14: Deep Learning for Computer Vision

Discover how the ImageNet challenge helped revive the field of neural networks through a technique called deep learning, which is ideal for tasks such as computer vision. Consider the problem of image recognition and the steps deep learning takes to solve it. Dr. Littman throws out his own challenge: Train a computer to distinguish foot files from cheese graters.

27 min

15: Getting a Deep Learner Back on Track

Roll up your sleeves and debug a deep-learning program. The software is a neural net classifier designed to separate pictures of animals and bugs. In this case, fix the bugs in the code to find the bugs in the images! Professor Littman walks you through diagnostic steps relating to the representational space, the loss function, and the optimizer. It’s an amazing feeling when you finally get the program working well.

30 min

16: Text Categorization with Words as Vectors

Previously, you saw how machine learning is used in spam filtering. Dig deeper into problems of language processing, such as how a computer guesses the word you are typing, even when you badly misspell it. Focus on the concept of word embeddings, which “define” the meanings of words using vectors in high-dimensional space—a method that involves techniques from linear algebra.

30 min
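Word embeddings are just arrays of numbers, and "similar meaning" becomes "small angle between vectors." The three-dimensional vectors below are invented toys; real GloVe vectors have hundreds of dimensions and are loaded from a downloaded file:

    import numpy as np

    # Invented 3-dimensional "embeddings"; real ones have 50-300 dimensions.
    vectors = {
        "cat":   np.array([0.9, 0.8, 0.1]),
        "dog":   np.array([0.8, 0.9, 0.2]),
        "piano": np.array([0.1, 0.2, 0.9]),
    }

    def cosine_similarity(a, b):
        # 1.0 means pointing the same way; near 0 means unrelated directions.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    print("cat vs dog:  ", cosine_similarity(vectors["cat"], vectors["dog"]))
    print("cat vs piano:", cosine_similarity(vectors["cat"], vectors["piano"]))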

17: Deep Networks That Output Language

Continue your study of machine learning and language by seeing how computers not only read text, but how they can also generate it. Explore the current state of machine translation, which rivals the skill of human translators. Also, learn how algorithms handle a game that Professor Littman played with his family, where a given phrase is expanded piecemeal to create a story. The results can be quite poetic!

29 min

18: Making Stylistic Images with Deep Networks

One way to think about the creative process is as a two-stage operation, involving an idea generator and a discriminator. Study two approaches to image generation using machine learning. In the first, a target image of a pig serves as the discriminator. In the second, the discriminator is programmed to recognize the general characteristics of a pig, which is more how people recognize objects.

29 min

19: Making Photorealistic Images with GANs

A new approach to image generation and discrimination pits both processes against each other in a “generative adversarial network,” or GAN. The technique can produce a new image based on a reference class, for example making a person look older or younger, or automatically filling in a landscape after a building has been removed. GANs have great potential for creativity and, unfortunately, fraud.

30 min

20: Deep Learning for Speech Recognition

Consider the problem of speech recognition and the quest, starting in the 1950s, to program computers for this task. Then delve into the machine-learning algorithms behind today’s sophisticated speech recognition systems. Get a taste of the technology by training with deep-learning software for recognizing simple words. Finally, look ahead to the prospect of conversing computers.

30 min

21: Inverse Reinforcement Learning from People

Are you no good at programming? Machine learning can learn from a demonstration, predict what you want, and suggest improvements. For example, inverse reinforcement learning turns the tables on the following logical relation: “if you are a horse and like carrots, go to the carrot.” Inverse reinforcement learning looks at it like this: “if you see a horse go to the carrot, it might be because the horse likes carrots.”

29 min

22: Causal Inference Comes to Machine Learning

Get acquainted with a powerful new tool in machine learning, causal inference, which addresses a key limitation of classical methods—the focus on correlation to the exclusion of causation. Practice with a historic problem of causation: the link between cigarette smoking and cancer, which will always be obscured by confounding factors. Also look at other cases of correlation versus causation.

30 min
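The correlation-versus-causation point can be demonstrated with a small simulation (generic synthetic data, not the smoking study): a hidden confounder drives both variables, producing a strong correlation even though neither causes the other:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # A hidden confounder drives both "exposure" and "outcome"; the exposure
    # has no direct effect on the outcome in this simulation.
    confounder = rng.normal(size=n)
    exposure = confounder + rng.normal(size=n)
    outcome = confounder + rng.normal(size=n)

    print("raw correlation:", np.corrcoef(exposure, outcome)[0, 1])   # about 0.5

    # Holding the confounder (approximately) fixed removes the association.
    stratum = np.abs(confounder) < 0.1
    print("correlation within one confounder stratum:",
          np.corrcoef(exposure[stratum], outcome[stratum])[0, 1])     # near 0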

23: The Unexpected Power of Over-Parameterization

Probe the deep-learning revolution that took place around 2015, conquering worries about overfitting data due to the use of too many parameters. Dr. Littman sets the stage by taking you back to his undergraduate psychology class, taught by one of The Great Courses’ original professors. Chart the breakthrough that paved the way for deep networks that can tackle hard, real-world learning problems.

30 min

24: Protecting Privacy within Machine Learning

Machine learning is both a cause and a cure for privacy concerns. Hear about two notorious cases where de-identified data was unmasked. Then, step into the role of a computer security analyst, evaluating different threats, including pattern recognition and compromised medical records. Discover how to think like a digital snoop and evaluate different strategies for thwarting an attack.

31 min

25: Mastering the Machine Learning Process

Finish the course with a lightning tour of meta-learning—algorithms that learn how to learn, making it possible to solve problems that are otherwise unmanageable. Examine two approaches: one that reasons about discrete problems using satisfiability solvers and another that allows programmers to optimize continuous models. Close with a glimpse of the future for this astounding field.

34 min