In episode 99 of The Gradient Podcast, Daniel Bashir speaks to Professor Martin Wattenberg.
Professor Wattenberg is a professor at Harvard and a part-time member of Google Research’s People + AI Research (PAIR) initiative, which he co-founded. His work, with long-time collaborator Fernanda Viégas, focuses on making AI technology broadly accessible and reflective of human values. At Google, Professor Wattenberg, his team, and Professor Viégas have created end-user visualizations for products such as Search, YouTube, and Google Analytics. Note: Professor Wattenberg is recruiting PhD students through Harvard SEAS (info here).
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
(00:00) Intro
(03:30) Prof. Wattenberg’s background
(04:40) Financial journalism at SmartMoney
(05:35) Contact with the academic visualization world, IBM
(07:30) Transition into visualizing ML
(08:25) Skepticism of neural networks in the 1980s
(09:45) Work at IBM
(10:00) Multiple scales in information graphics, organization of information
(13:55) How much information should a graphic display to whom?
(17:00) Progressive disclosure of complexity in interface design
(18:45) Visualization as a rhetorical process
(20:45) Conversation Thumbnails for Large-Scale Discussions
(21:35) Evolution of conversation interfaces—Slack, etc.
(24:20) Path dependence — mutual influences between user behaviors and technology, takeaways for ML interface design
(26:30) Baby Names and Social Data Analysis — patterns of interest in baby names
(29:50) History Flow
(30:05) Why investigate editing dynamics on Wikipedia?
(32:06) Implications of editing patterns for design and governance
(33:25) The value of visualizations in this work, issues with Wikipedia editing
(34:45) Community moderation, bureaucracy
(36:20) Consensus and guidelines
(37:10) “Neutral” point of view as an organizing principle
(38:30) Takeaways
PAIR
(39:15) Tools for model understanding and “understanding” ML systems
(41:10) Intro to PAIR (at Google)
(42:00) Unpacking the word “understanding” and use cases
(43:00) Historical comparisons for AI development
(44:55) The birth of TensorFlow.js
(47:52) Democratization of ML
(48:45) Visualizing translation — uncovering and telling a story behind the findings
(52:10) Shared representations in LLMs and their facility at translation-like tasks
(53:50) TCAV
(55:30) Explainability and trust
(59:10) Writing code with LMs and metaphors for using them
More recent research
(1:01:05) The System Model and the User Model: Exploring AI Dashboard Design
(1:10:05) OthelloGPT and world models, causality
(1:14:10) Dashboards and interaction design—interfaces and core capabilities
(1:18:07) Reactions to existing LLM interfaces
(1:21:30) Visualizing and Measuring the Geometry of BERT
(1:26:55) Note/Correction: The “Atlas of Meaning” Prof. Wattenberg mentions is called Context Atlas
(1:28:20) Language model tasks and internal representations/geometry
(1:29:30) LLMs as “next word predictors” — explaining systems to people
(1:31:15) The Shape of Song
(1:31:55) What does music look like?
(1:35:00) Levels of abstraction, emergent complexity in music and language models
(1:37:00) What Prof. Wattenberg hopes to see in ML and interaction design
(1:41:18) Outro
Links:
Professor Wattenberg’s homepage and Twitter
Harvard SEAS application info — Professor Wattenberg is recruiting students!
Research
Earlier work: Conversation Thumbnails for Large-Scale Discussions, Baby Names and Social Data Analysis, History Flow
At Harvard and Google / PAIR
Tools for Model Understanding: Facets, SmoothGrad, Attacking discrimination with smarter ML
Other ML papers:
Artwork