On why AI and nuclear weapons are not the same, what developing (AI) policy looks like, and the ever-present AI governance / innovation tradeoff.
Daniel's interview with Divyansh Kaushik is worth listening to for several different reasons:
1. Divyansh's approach to policy is incredibly practical. While he admits AI involves some serious existential threats, he argues that our responses to AI should focus on the next wave of negative consequences --- not on the worst possible scenarios. He suggests that our responses will be structurally very similar whether the threat horizon is short- or long-term.
2. Divyansh's approach offers some hope that politics "on the Hill" is not as partisan as it seems in the media. After working for several years with politicians on both sides of our political divide, he has concluded that strong political division persists in only about 5% of issues on the Hill. In the other 95% of the terrain, including much of the AI policy debate, Democrats and Republicans are showing some willingness to work together toward pragmatic solutions.
3. Divyansh's approach calls all of us working in the AI field--whether in media, technology, or policy--to shift gears from theory and innovation to practice and implementation. According to him, it is now time to figure out how to make AI work for us. This will take many long, hard, deliberative conversations between stakeholders. We writers on Substack can be part of the solution. Instead of continuing to play the pro-AI / anti-AI antithesis game, we can now turn our writing to the practical work of figuring out how to implement and integrate AI in a way that does not negate human individuality, autonomy, freedom, privacy, and creativity. No small order...