4 Comments
Mar 28, 2023

I lead an ML team (applied, not research). Obviously these models are amazing, but also so underwhelming. Every time I've asked one of these models to do anything other than summarize general knowledge it was trained on, it has failed. For example, I thought maybe GPT-3.5 could help with my literature reviews and summarize papers for me. Even with papers that weren't very technical, at best I got a poor summary of the abstract. A high percentage of the time I got back pure hallucination. "Sparks of AGI" is typical big-tech hype.

Author:

Hey Matt, thanks for sharing your experience here. I definitely agree that at least right now, these systems are (very) imperfect tools. The generative capabilities of the pre-trained base models can handle nuance to an extent (at least in anecdotes I've heard), but I think only when that nuance already exists as an artifact of existing discourse. I like the way Ted Underwood conceives of these systems as models of culture, e.g. here: https://tedunderwood.com/2021/10/21/latent-spaces-of-culture/


I heard a great Bengio interview over the winter where he (much more expertly, of course) confirmed my thinking that big-data models yielding AGI or anything like it is basically laughable (not his word!). We'll need at least one more big breakthrough. Which, hell, could come very quickly. But we're not getting responsible search integration without it.

Author:

Yeah, I'm sympathetic to this viewpoint, and you'll certainly find plenty of others echoing the sentiment that this is a fundamentally wrong (though useful!) direction.
