A general comment on the 1839 Awards. I've noticed a trend in commentary on AI developments - not on this Substack, but in general - to move goalposts as a form of, as the kids say, "cope." People will celebrate a human tricking human judges in an AI image generation competition with a "real" image, or declare that current AI is garbage and overhyped because it cannot meet one specialized use case that, if they understood how the systems work, they would not expect it to handle. People still joke that image generators cannot draw hands at all, but anyone who uses a paid image model like Midjourney knows that is a solved problem - in the rare case that a human hand is not rendered correctly, a user can fix it in seconds with inpainting. A good rule of thumb: whenever the media tells you an AI system "cannot" do something, check the frontier paid models and the open-source literature. Chances are someone has worked out how to do exactly the thing you were told is impossible (though the real question is whether it can be done consistently at scale).
I definitely believe that businesses and VCs have every motive to overhype AI, and that they are doing so, whether dishonestly or honestly (it is easier to hype what you actually believe). But if the hype is "artificial general intelligence by 2030," the alternative is not "no progress at all between now and 2030"; it is a vast range of possible futures, none of which involve AI systems becoming less capable than they are today. The transformation does not come when the AI is better at economically valuable task X than the best human being, but when it is better than 80% of all human beings, while being orders of magnitude faster and less expensive.