The Gradient: Perspectives on AI
Percy Liang on Machine Learning Robustness, Foundation Models, and Reproducibility

An interview with Percy Liang, an Associate Professor of Computer Science at Stanford University and the director of the Center for Research on Foundation Models

In episode 21 of The Gradient Podcast, we talk to Percy Liang, an Associate Professor of Computer Science at Stanford University and the director of the Center for Research on Foundation Models.

Subscribe to The Gradient Podcast:  Apple Podcasts  | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Percy Liang’s research spans many topics in machine learning and natural language processing, including robustness, interpretability, semantics, and reasoning. He is also a strong proponent of reproducibility, which he supports through the creation of CodaLab Worksheets. His awards include the Presidential Early Career Award for Scientists and Engineers (2019), the IJCAI Computers and Thought Award (2016), an NSF CAREER Award (2016), a Sloan Research Fellowship (2015), a Microsoft Research Faculty Fellowship (2014), and multiple paper awards at ACL, EMNLP, ICML, and COLT.

Sections:

(00:00) Intro
(01:21) Start in AI
(06:52) Interest in Language
(10:17) Start of PhD
(12:22) Semantic Parsing
(17:49) Focus on ML robustness
(22:30) Foundation Models, model robustness
(28:55) Foundation Model bias
(34:48) Foundation Model research by academia
(37:13) Current research interests
(39:40) Surprising robustness results
(44:24) Reproducibility and CodaLab
(50:17) Outro

Papers / Topics discussed:

