8 Comments

An incredibly helpful interview. Riley gave me some good ideas for generating the kinds of materials I want to use with my own students. The way human beings cultivate certain behaviors through fine-tuning appears to destabilize the denotation of "artificial," at least in some abstract sense. There is something enigmatic and amazing about in-context learning. For my own Substack, I think I might try creating some screenshots of how ChatGPT responds to different kinds of prompts in order to help other teachers along their way. Thanks, Riley and Daniel!!!

author

I'm glad you found this helpful! Not to keep plugging things, but in case it's interesting/helpful, I wrote an article on in-context learning a little while ago (I'm pretty sure it's already a bit out of date): https://thegradient.pub/in-context-learning-in-context/

Keep plugging things! I am using your Substack / podcast as a sort of graduate seminar!!!

Imagine how much more... or less nuanced this will be in... 5 years?

author

Indeed, could go either way!

You didn’t mention it, but what about prompting diffusion models? I’ve been exploring this domain. I’ve found not everyone has a knack for summoning beautiful images.

It’s possible this expertise might be valuable in other contexts. Or that it will “merge” with the text token engineering in some way.

author

Hmm, interesting. Yeah, I think there’s some shared substrate, at least in the skill of refining your prompts to really “say what you mean and only what you mean,” but there are also the tricks, e.g. how people figured out that adding “unreal engine” to your prompt yields high-quality images. I think there are definitely meta-skills here that have pretty broad applicability.
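
To make the “unreal engine” trick concrete, here’s a minimal sketch using Hugging Face’s diffusers library; the model id, prompt, and style keywords are illustrative assumptions, not anything from the interview:

```python
# Minimal sketch: appending style keywords to a text-to-image prompt.
# Assumes the Hugging Face `diffusers` library and a CUDA GPU are available;
# the model id and prompts below are illustrative.
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained text-to-image pipeline (example model id).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

base_prompt = "a castle on a cliff at sunset"
# The "trick": style keywords like "unreal engine" often nudge the
# model toward polished, high-detail renders.
styled_prompt = base_prompt + ", unreal engine, highly detailed"

image = pipe(styled_prompt).images[0]
image.save("castle.png")
```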
