We consider responses to recent letters calling for pauses on AI experiments and a memory-efficient optimization method that fine-tunes LLMs without using gradients.
Thanks for The Gradient. Such a valuable resource! I tend to focus on the point of agreement between the doomers and the hype beasts: a shared faith that there will be quick, continuous growth to AGI from current models, pretending that big-data LLMs are that missing something, or close to it. It seems clear to me there needs to be at least one more breakthrough like the optimization one around 2012. Given that so much of the cutting edge is led by big companies (and the CCP!) with big budgets for hardware, IMO that adds a whole other set of risks and potential outcomes. Thoughts?
More to say than I can put in a comment! But yes, I think the LLM direction plus geopolitical tensions have their own risk/reward scenario (if you’re interested in China-related stuff and their regulations, I had a conversation with Matt Sheehan on this a while ago). Re: breakthroughs, I find myself going back and reading some classics of the connectionist/symbolist debates, and I’m really not convinced we’ve become that much more sophisticated in our treatment of some of the core issues, e.g. the hard problems of cognitive science.
Agreed. I'd say meaningful work and movement in that direction has finally started, though. LeCun's proposed framework, which Meta AI recently published their first results on, seems like a promising avenue to me. I've also heard Bengio talk about work he's leading in the direction of enabling persistent abstraction and "one-shot" learning in the real human sense, not the mostly-hype sense in which it's used to describe current capabilities.
And many thanks for reading!
This is the Prisoner's Dilemma writ large. You have inspired me to write more about this! Thank you.
I’m really glad to hear this, thank you for reading and I’d love to see what you end up writing!
Thanks so much. I find myself quite the AI philosopher these days!