Discussion about this post

Matt:
I lead an ML team (applied, not research). Obviously these models are amazing, but also so underwhelming. Every time I’ve asked any of these models to do anything other than summarize general knowledge it was trained on, it fails. For example, I thought maybe GPT-3.5 could help with my literature reviews and summarize papers for me. Even with papers that weren’t very technical, at best I got a bad summary of the abstract. A high percentage of the time I got back pure hallucination. “Sparks of AGI” is typical big-tech hype.

