We Need Positive Visions for AI Grounded in Wellbeing
Joel Lehman and Amanda Ngo demystify "beneficial AI" by grounding the notion in human wellbeing and chart a path for developing it.
I’m very excited to share this wonderful piece by Joel Lehman and Amanda Ngo, who have both spent a long time thinking deeply about how to create socially beneficial AI systems that support human wellbeing. Here, they articulate a thoughtful vision of what that might look like and how we can get there. — Daniel
Article Preview:
Imagine yourself a decade ago, jumping directly into the present shock of conversing naturally with an encyclopedic AI that crafts images, writes code, and debates philosophy. Won’t this technology almost certainly transform society — and hasn’t AI’s impact on us so far been a mixed bag? It’s no surprise, then, that so many conversations these days circle around an era-defining question: How do we ensure AI benefits humanity? These conversations often devolve into strident optimism or pessimism about AI. Our earnest aim is to walk a pragmatic middle path, though no doubt we will not perfectly succeed.
I really like this article. "Understanding where we want to go" feels like a crucial gap in discussions of how to govern AI. Not having this (above and beyond 'avoid catastrophe') makes it harder to spot important agreements and disagreements. And at a personal level, not having a positive vision sometimes makes it less motivating for me to work on forward-looking AI governance.
I was just watching Joscha Bach on YouTube, from this year, opining that humans are not aligned — and that as a young and vibrant species, we're not expected to be; we're individuals with autonomy. So why do we expect AI to align with us? He says the only hope is machine consciousness, and that it recognises us as conscious and deserving of understanding and respect. I thought that was a good point.