Many intelligent robots have come and gone without ever becoming a commercial success. We’ve lost Aibo, Romo, Jibo, Baxter—even Amazon’s Alexa division is shedding staff. Perhaps they failed to reach their potential because you can’t have a meaningful conversation with them. We are now at an inflection point: AI has recently made substantial progress, speech recognition finally works, and we have neural networks in the form of large language models (LLMs), such as ChatGPT and GPT-4, that produce astounding natural language. The problem is that you can’t just have a robot make API calls to a generic LLM in the cloud, because those models aren’t sufficiently localized for what your robot needs to know. Robots live in the physical world, so they must take in context and be hyperlocal: they need to know about this room, these objects, and this person, which means they must be able to learn quickly. Rapid learning is also required when LLMs serve as advisers in specialized domains such as science and auto repair.