In episode 100 of The Gradient Podcast, Daniel Bashir speaks to Professor Thomas Dietterich.
Professor Dietterich is Distinguished Professor Emeritus in the School of Electrical Engineering and Computer Science at Oregon State University. He is a pioneer in the field of machine learning and has authored more than 225 refereed publications and two books. His current research topics include robust artificial intelligence, robust human-AI systems, and applications in sustainability. He is a former President of the Association for the Advancement of Artificial Intelligence and the founding President of the International Machine Learning Society. His other major roles include Executive Editor of the journal Machine Learning, co-founder of the Journal of Machine Learning Research, and program chair of AAAI 1990 and NIPS 2000. He currently serves as one of the moderators for the cs.LG category on arXiv.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
(00:00) Episode 100 Note
(02:03) Intro
(04:23) Prof. Dietterich’s background
(14:20) Kuhn and theory development in AI, how Prof Dietterich thinks about the philosophy of science and AI
(20:10) Scales of understanding and sentience, grounding, observable evidence
(23:58) Limits of statistical learning without causal reasoning, systematic understanding
(25:48) A challenge for the ML community: testing for systematicity
(26:13) Forming causal understandings of the world
(28:18) Learning at the Knowledge Level
(29:18) Background and definitions
(32:18) Knowledge and goals, a note on LLMs
(33:03) What it means to learn
(41:05) LLMs as learning results of inference without learning first principles
(43:25) System I/II thinking in humans and LLMs
(47:23) “Routine Science”
(47:38) Solving multiclass learning problems via error-correcting output codes
(52:53) Error-correcting codes and redundancy
(54:48) Why error-correcting codes work, contra intuition
(59:18) Bias in ML
(1:06:23) MAXQ for hierarchical RL
(1:15:48) Computational sustainability
(1:19:53) Project TAHMO’s moonshot
(1:23:28) Anomaly detection for weather stations
(1:25:33) Robustness
(1:27:23) Motivating The Familiarity Hypothesis
(1:27:23) Anomaly detection and self-models of competence
(1:29:25) Measuring the health of freshwater streams
(1:31:55) An open set problem in species detection
(1:33:40) Issues in anomaly detection for deep learning
(1:37:45) The Familiarity Hypothesis
(1:40:15) Mathematical intuitions and the Familiarity Hypothesis
(1:44:12) What’s Wrong with LLMs and What We Should Be Building Instead
(1:46:20) Flaws in LLMs
(1:47:25) The systems Prof Dietterich wants to develop
(1:49:25) Hallucination/confabulation and LLMs vs knowledge bases
(1:54:00) World knowledge and linguistic knowledge
(1:55:07) End-to-end learning and knowledge bases
(1:57:42) Components of an intelligent system and separability
(1:59:06) Thinking through external memory
(2:01:10) Outro
Links:
Research — Fundamentals (Philosophy of AI)
Research — “Routine Science”
Ensemble methods in ML and error-correcting output codes
Discovering/Exploiting structure in MDPs
Research — Ecosystem Informatics and Computational Sustainability
Research — Robustness
Steps Toward Robust Artificial Intelligence (AAAI Presidential Address)
Benchmarking Neural Network Robustness to Common Corruptions and Perturbations with Dan Hendrycks
The familiarity hypothesis: Explaining the behavior of deep open set methods
Recent commentary