The Gradient: Perspectives on AI

Scott Aaronson: Against AI Doomerism

On the limitations of quantum machine learning, watermarking GPT outputs, AI safety, the orthogonality thesis, and doomerist sentiment.

In episode 72 of The Gradient Podcast, Daniel Bashir speaks to Professor Scott Aaronson.

Scott is the Schlumberger Centennial Chair of Computer Science at the University of Texas at Austin and director of its Quantum Information Center. His research focuses on the capabilities and limits of quantum computers and, more broadly, on computational complexity theory. He has recently been on leave to work at OpenAI, where he is researching the theoretical foundations of AI safety.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:

  • (00:00) Intro

  • (01:45) Scott’s background

  • (02:50) Starting grad school in AI, transitioning to quantum computing and the AI / quantum computing intersection

  • (05:30) Where quantum computers can give us exponential speedups, simulation overhead, Grover’s algorithm

  • (10:50) Overselling of quantum computing applied to AI, Scott’s analysis on quantum machine learning

  • (18:45) ML problems that involve quantum mechanics and Scott’s work

  • (21:50) Scott’s recent work at OpenAI

  • (22:30) Why Scott was skeptical of AI alignment work early on

  • (26:30) Unexpected improvements in modern AI and Scott’s belief update

  • (32:30) Preliminary Analysis of DALL-E 2 (Marcus & Davis)

  • (34:15) Watermarking GPT outputs

  • (41:00) Motivations for watermarking and language model detection

  • (45:00) Ways around watermarking

  • (46:40) Other aspects of Scott’s experience with OpenAI, theoretical problems

  • (49:10) Thoughts on definitions for humanistic concepts in AI

  • (58:45) Scott’s “reform AI alignment” stance and Eliezer Yudkowsky’s recent comments (+ Daniel mispronounces Eliezer’s name), the orthogonality thesis, cases for stopping scaling

  • (1:08:45) Outro

Links:

Deeply researched, technical interviews with experts thinking about AI and technology. Hosted, recorded, researched, and produced by Daniel Bashir.