
Learning Machines 101

Technology • Science

Smart machines based on the principles of artificial intelligence and machine learning are now prevalent in our everyday lives. For example, artificially intelligent systems recognize our voices, sort our pictures, make purchasing suggestions, and can automatically fly planes and drive cars. In...

Popular episodes

LM101-086: Ch8: How to Learn the Probability of Infinitely Many Outcomes

Jul 20 • 35:29

This 86th episode of Learning Machines 101 discusses the problem of assigning probabilities to a possibly infinite set of outcomes in a space-time continuum which characterizes our physical world. Such a set is called an “environmental event”. The machine learning algorithm uses information about the frequency of environmental events to support learning. If we want to stud...
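The idea of assigning probability to events rather than to individual points in a continuum can be sketched in a few lines; the exponential density below is an illustrative assumption, not an example from the episode.

```python
import math

# In a continuum of outcomes, any single point has probability zero;
# probability is assigned to events (sets) via a density:
#   P(a < X < b) = integral from a to b of p(x) dx.
# Illustrative density: exponential with rate 1, p(x) = exp(-x) on [0, inf).
def prob_interval(a, b):
    """Probability of the environmental event (a, b) under p(x) = exp(-x)."""
    return math.exp(-a) - math.exp(-b)

mass_1_2 = prob_interval(1.0, 2.0)        # mass of the event (1, 2)
total = prob_interval(0.0, math.inf)      # the whole outcome space has mass 1
```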

LM101-085: Ch7: How to Guarantee your Batch Learning Algorithm Converges

May 21 • 30:51

This 85th episode of Learning Machines 101 discusses formal convergence guarantees for a broad class of machine learning algorithms designed to minimize smooth non-convex objective functions using batch learning methods. In particular, a broad class of unsupervised, supervised, and reinforcement machine learning algorithms which iteratively update their parameter vector by...
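The iterative parameter update described here can be sketched as full-batch gradient descent on a smooth non-convex objective; the objective, step size, and starting point below are illustrative assumptions, not the episode's examples.

```python
def batch_gradient_descent(grad, theta, lr=0.05, steps=200):
    """Iteratively update the parameter vector using the full-batch gradient."""
    for _ in range(steps):
        theta = theta - lr * grad(theta)
    return theta

# Illustrative smooth non-convex objective: f(t) = (t^2 - 1)^2,
# with gradient f'(t) = 4 t (t^2 - 1); minima at t = -1 and t = +1.
grad = lambda t: 4.0 * t * (t ** 2 - 1.0)
theta_hat = batch_gradient_descent(grad, 2.0)
```

With this step size the iterates settle onto the nearby minimum at t = 1; convergence guarantees of the kind discussed in the chapter constrain the step size relative to the objective's smoothness.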

LM101-084: Ch6: How to Analyze the Behavior of Smart Dynamical Systems

Jan 5 • 33:13

In this episode of Learning Machines 101, we review Chapter 6 of my book “Statistical Machine Learning” which introduces methods for analyzing the behavior of machine inference algorithms and machine learning algorithms as dynamical systems. We show that when dynamical systems can be viewed as special types of optimization algorithms, the behavior of those systems even whe...

LM101-083: Ch5: How to Use Calculus to Design Learning Machines

Aug 29 • 34:22

This particular podcast covers the material from Chapter 5 of my new book “Statistical Machine Learning: A unified framework” which is now available! The book chapter shows how matrix calculus is very useful for the analysis and design of both linear and nonlinear learning machines with lots of examples. We discuss how to use the matrix chain rule for deriving deep learnin...
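As a sketch of the matrix chain rule in action, the snippet below derives the gradient of a squared-error loss through a one-hidden-layer tanh network and checks it against a finite-difference approximation; the network shape and data are illustrative assumptions, not the book's examples.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 2))      # hidden-layer weights
W2 = rng.normal(size=(1, 3))      # output-layer weights
x = rng.normal(size=(2, 1))
y = np.array([[1.0]])

def loss(W1, W2):
    h = np.tanh(W1 @ x)                       # hidden activations
    return 0.5 * float(((W2 @ h) - y) ** 2)   # squared error

# Matrix chain rule: with e = W2 h - y and h = tanh(W1 x),
#   dL/dW1 = ((W2^T e) * (1 - h^2)) x^T
h = np.tanh(W1 @ x)
e = (W2 @ h) - y
grad_W1 = ((W2.T @ e) * (1.0 - h ** 2)) @ x.T

# Finite-difference check of the analytic gradient
eps = 1e-6
numeric = np.zeros_like(W1)
for i in range(W1.shape[0]):
    for j in range(W1.shape[1]):
        plus, minus = W1.copy(), W1.copy()
        plus[i, j] += eps
        minus[i, j] -= eps
        numeric[i, j] = (loss(plus, W2) - loss(minus, W2)) / (2 * eps)
```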

LM101-082: Ch4: How to Analyze and Design Linear Machines

Jul 23 • 29:05

This episode covers the material in Chapter 4 of my forthcoming book “Statistical Machine Learning: A unified framework.” Chapter 4 is titled “Linear Algebra for Machine Learning....

LM101-081: Ch3: How to Define Machine Learning (or at Least Try)

Apr 9 • 37:20

This podcast covers the material in Chapter 3 of my new book “Statistical Machine Learning: A unified framework”, with expected publication date May 2020. In this chapter we discuss how to formally define machine learning algorithms. Briefly, a learning machine is viewed as a dynamical system that is minimizing an objectiv...

LM101-080: Ch2: How to Represent Knowledge using Set Theory

Feb 29 • 31:43

LM101-079: Ch1: How to View Learning as Risk Minimization

Dec 24 • 26:07

This podcast covers the material in Chapter 1 of my new (unpublished) book “Statistical Machine Learning: A unified framework”, which shows how supervised, unsupervised, and reinforcement learning algorithms can be viewed as special cases of a general empirical risk minimization framework. This is useful becau...
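The empirical risk minimization view can be sketched in a few lines: the empirical risk is the average per-example loss over the training sample, and different learning problems plug in different loss functions. The regression example below is an illustrative assumption.

```python
import numpy as np

def empirical_risk(theta, xs, ys, loss):
    """Average per-example loss over the training sample."""
    return float(np.mean([loss(theta, x, y) for x, y in zip(xs, ys)]))

# Supervised regression as a special case: squared-error loss
squared_error = lambda theta, x, y: (theta * x - y) ** 2

xs = np.array([1.0, 2.0, 3.0])
ys = 2.0 * xs                     # noiseless targets generated with theta = 2
risks = {t: empirical_risk(t, xs, ys, squared_error) for t in (1.0, 2.0, 3.0)}
```

A learning algorithm in this framework is any procedure that drives the empirical risk down; swapping in a negative log-likelihood or a reinforcement-style loss recovers the other cases.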

LM101-078: Ch0: How to Become a Machine Learning Expert

Oct 24 • 39:18

This particular podcast (Episode 78 of Learning Machines 101) is the initial episode in a new special series of episodes designed to provide commentary on a new book that I am in the process of writing. In this episode we discuss books, software, courses, and podcasts designed to help you become a machine learning expert! For more information, check out: www.learningmachin...

LM101-077: How to Choose the Best Model using BIC

May 2 • 24:15

In this 77th episode of Learning Machines 101, we explain the proper semantic interpretation of the Bayesian Information Criterion (BIC) and emphasize how this semantic interpretation is fundamentally different from AIC (Akaike Information Criterion) model selection methods. Briefly, BIC is used to estimate the probability of the training data given the probability ...
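As a minimal sketch of BIC-based model selection (assuming the common form k·ln(n) − 2·ln L̂, lower is better), the two candidate models below are hypothetical numbers, not figures from the episode:

```python
import math

def bic(log_likelihood, k, n):
    """Bayesian Information Criterion: k free parameters, n observations.
    Lower values indicate the preferred model."""
    return k * math.log(n) - 2.0 * log_likelihood

# Two hypothetical candidate models fit to n = 100 observations:
# model A has 3 free parameters (log-likelihood -120),
# model B has 10 free parameters (log-likelihood -118).
bic_a = bic(-120.0, k=3, n=100)
bic_b = bic(-118.0, k=10, n=100)
preferred = "A" if bic_a < bic_b else "B"
```

Here BIC prefers the smaller model A: B's modest likelihood gain does not offset the penalty for its seven extra parameters, which is the kind of complexity trade-off the criterion encodes.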
