7 Comments
Jun 21, 2023 · Liked by daniel bashir

Thanks for The Gradient. Such a valuable resource! I tend to focus on the point of agreement between the doomers and hype beasts: faith that there will be quick, continuous growth to AGI from current models, pretending that big-data LLMs are, or are close to, that something. It seems clear to me there needs to be at least one more breakthrough like the optimization one around 2012. Given so much of the cutting edge is led by big companies (and the CCP!) with big budgets for hardware, IMO that adds a whole other set of risks and potential outcomes. Thoughts?

author

And many thanks for reading!

author

More to say than I can put in a comment! But yes, I think the LLM direction + geopolitical tensions have their own risk/reward scenario (if you're interested in China-related stuff / their regulations, I had a conversation with Matt Sheehan on this a while ago). Re: breakthroughs, I'm finding myself going back and reading some classics of the connectionist/symbolist debates, and I'm really not convinced we've become that much more sophisticated in our treatment of some of the core issues, e.g. the hard problems of cognitive science.

Jun 21, 2023 · Liked by daniel bashir

Agreed. I'd say meaningful work and movement in that direction has finally started, though. LeCun's proposed framework, which Meta AI recently published their first results on, seems like a promising avenue to me. I've also heard Bengio talk about work he's leading in the direction of enabling persistent abstraction and "one-shot" learning in the real human sense, not the mostly hype sense in which it's used to describe current capabilities.


This is the Prisoner's Dilemma writ large. You have inspired me to write more about this! Thank you.

author

I’m really glad to hear this, thank you for reading and I’d love to see what you end up writing!


Thanks so much. I find myself becoming quite the AI philosopher these days!
