I really like this article. "Understanding where we want to go" feels like a crucial gap in discussions of how to govern AI. Not having this (above and beyond 'avoid catastrophe') makes it harder to spot important agreements and disagreements. And at a personal level, not having a positive vision sometimes makes it less motivating for me to work on forward-looking AI governance.
I was just watching Joscha Bach on YouTube, in a talk from this year, opining that humans are not aligned with each other, and as a young and vibrant species aren't expected to be; we're individuals with autonomy. So why do we expect AI to align with us? He says the only hope is machine consciousness: that an AI recognises us as conscious and deserving of understanding and respect. I thought that was a good point.