The Gradient: Perspectives on AI
Martin Wattenberg: ML Visualization and Interpretability

On the principles and practice of interaction design, visualization and interpretability for ML systems, and how to understand and explain language models.

In episode 99 of The Gradient Podcast, Daniel Bashir speaks to Professor Martin Wattenberg.

Professor Wattenberg is a professor at Harvard and part-time member of Google Research’s People + AI Research (PAIR) initiative, which he co-founded. His work, with long-time collaborator Fernanda Viégas, focuses on making AI technology broadly accessible and reflective of human values. At Google, Professor Wattenberg, his team, and Professor Viégas have created end-user visualizations for products such as Search, YouTube, and Google Analytics. Note: Professor Wattenberg is recruiting PhD students through Harvard SEAS—info here.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:

  • (00:00) Intro

  • (03:30) Prof. Wattenberg’s background

    • (04:40) Financial journalism at SmartMoney

    • (05:35) Contact with the academic visualization world, IBM

    • (07:30) Transition into visualizing ML

  • (08:25) Skepticism of neural networks in the 1980s

  • (09:45) Work at IBM

    • (10:00) Multiple scales in information graphics, organization of information

      • (13:55) How much information should a graphic display to whom?

      • (17:00) Progressive disclosure of complexity in interface design

      • (18:45) Visualization as a rhetorical process

    • (20:45) Conversation Thumbnails for Large-Scale Discussions

      • (21:35) Evolution of conversation interfaces—Slack, etc.

      • (24:20) Path dependence — mutual influences between user behaviors and technology, takeaways for ML interface design

    • (26:30) Baby Names and Social Data Analysis — patterns of interest in baby names

    • (29:50) History Flow

      • (30:05) Why investigate editing dynamics on Wikipedia?

      • (32:06) Implications of editing patterns for design and governance

        • (33:25) The value of visualizations in this work, issues with Wikipedia editing

        • (34:45) Community moderation, bureaucracy

        • (36:20) Consensus and guidelines

          • (37:10) “Neutral” point of view as an organizing principle

      • (38:30) Takeaways

  • PAIR

    • (39:15) Tools for model understanding and “understanding” ML systems

      • (41:10) Intro to PAIR (at Google)

      • (42:00) Unpacking the word “understanding” and use cases

      • (43:00) Historical comparisons for AI development

    • (44:55) The birth of TensorFlow.js

      • (47:52) Democratization of ML

    • (48:45) Visualizing translation — uncovering and telling a story behind the findings

      • (52:10) Shared representations in LLMs and their facility at translation-like tasks

    • (53:50) TCAV

      • (55:30) Explainability and trust

      • (59:10) Writing code with LMs and metaphors for using them

  • More recent research

  • (1:31:15) The Shape of Song

    • (1:31:55) What does music look like?

    • (1:35:00) Levels of abstraction, emergent complexity in music and language models

  • (1:37:00) What Prof. Wattenberg hopes to see in ML and interaction design

  • (1:41:18) Outro

Deeply researched, technical interviews with experts thinking about AI and technology. Hosted, recorded, researched, and produced by Daniel Bashir.