TL;DR: Thank you all for joining or sticking with us through another year. I wanted to take a moment to re-introduce who we are and what we’re doing. We’d also love your feedback: when you have a chance, please fill out this form to let us know a bit more about you and what you’re looking for from The Gradient.
Intro
Friends, subscribers, however you prefer to be addressed:
Happy new year! We’ve been in your inbox a lot this year, I think. According to Substack, more than half of you joined us this year, and we’re really thankful to have you with us.
For those of you less familiar with our work, The Gradient started out as a digital magazine: a platform to host essays that would ground the discussion of a world-changing technology in technical facts. Our founding editors wrote these words in an Editors’ Note in 2018, and they still carry some truth:
[W]hile the popular media’s interest in intelligent algorithms has been growing, its reporting often loses touch with reality, falling back to science-fiction tropes and sensationalism. Real and pressing issues in the field are being ignored, while the imagined and contrived are consistently overblown.
In addition to our online magazine, where we continue to publish essays from the community, we began publishing a newsletter and podcast in 2021. We also host a Mastodon instance called Sigmoid Social for the AI community. As a quick recap, this year:
We’ve worked with writers to publish 17 articles in our magazine
We’ve released 26 editions of our Update newsletter
We’ve released 51 episodes of The Gradient Podcast
Our Mission
I don’t even need to say that 2023 was a big year for AI. While increased interest in the field is welcome, it’s a double-edged sword. We’ve seen breathless coverage of nearly every nook and cranny of AI, from the “emergent capabilities” of language models to the subcultures among AI researchers and professionals.
Covering these issues in a way that is accessible and avoids hype is tricky. Non-technical coverage of AI tends towards vague proclamations about its abilities without technical grounding, while even technical coverage can fall prey to its own biases (Microsoft’s “Sparks of AGI” paper, while grounded in experiments, seemed to encourage a fair amount of ungrounded speculation; it’s not obvious, for instance, that LLMs can at present reason and plan).
We still aim to offer sober, sophisticated reporting on the latest developments in AI research, but we see our mission as more than that. We aim to publish essays on issues that are fundamental and impactful, both timely and timeless.
The Gradient is a non-profit, volunteer-run organization. We’re supported by subscribers like you, and the money we receive currently goes towards upkeep of thegradient.pub, hosting Sigmoid Social, subscriptions for the software we use to produce our content, and ad hoc expenses.
Being a non-profit, and not particularly focused on growth, has an important impact on our editorial decisions. We see our online magazine as a platform that welcomes perspectives and research overviews from anyone in the community who is willing to work with us to write something of value to our audience. We’ve published essays that provide technically grounded perspectives on interpretability and explainability, and we’ve considered linguistic perspectives on language models: we published an essay in our magazine, and I spoke with Tal Linzen about how psycholinguistics and deep learning inform each other.
Similarly, our podcast and newsletter are independent, and we make editorial decisions based on what we think will be most interesting and valuable to our readers and listeners. We are only a few people, so those decisions are inevitably subjective, but we’re more interested in edification than growth. Our Update #63 newsletter was released just after Sam Altman’s initial firing from OpenAI; I decided not to cover the news and add to the flood of information you were undoubtedly already receiving.
Our Audience (you!)
You all come from a variety of backgrounds: some of you are academics; some of you are software engineers; some of you work in non-technical roles and have varying levels of background in AI; some of you are just interested in the technology. We recognize that not all of our content is going to hit the mark for everyone. But we hope that, if you’re reading this, we’ve published something that has made you think or taught you something new.
That said, we do want to know more about all of you and what you’re interested in. We love hearing from you, and we want to make sure we’re delivering content and essays that you find valuable (after all, we are a bunch of kids in a Slack room who get really excited when you say nice things about us).
To that end, let’s try this poll thing again. Since Substack only allows multiple-choice polls, I’m running it as a Google form this time. If you have a few moments, please fill it out; it’ll help us a lot.
Closing and Highlights
We’re looking forward to 2024, and we hope to keep delivering on the reason The Gradient was started in the first place: AI is a world-changing technology, and discussions of its capabilities and implications need to be technically grounded; these discussions also require input from a variety of fields. I’ll point you to a few things we’ve produced this year that we think you should read or listen to, if you haven’t already. I’ll limit myself to three each from the magazine, newsletter, and podcast, acknowledging that there are many more that deserve a mention:
Magazine
Petar Veličković’s overview article on neural algorithmic reasoning.
Kenneth Li’s article on Othello-GPT, exploring whether large language models merely memorize training data or build internal world models.
Arjun Ramani and Zhengdong Wang’s collection of technical, social, and economic arguments that the path to transformative AI is not straightforward.
Newsletter
Our Update #49, covering fundamental limitations of alignment in LLMs and regulatory divergence.
Our Update #52, discussing the ironies in pausing AI and fine-tuning LLMs without backpropagation.
Our Update #44, covering challenges for personal robotics and cheap methods for poisoning web-scale datasets.
Podcast
My conversation with Terry Winograd, a legendary pioneer in artificial intelligence, who built one of the first programs for natural language processing, among many other achievements. This conversation challenged how I think about some of AI’s more fundamental questions, and grounded a number of narratives and debates in AI’s history.
My conversation with Ken Liu, a brilliant science fiction author who has deeply considered what it is to be a technologist, and the back and forth between the technologies we build and the stories we tell. His science fiction changed how I think about technology, and this conversation seriously impacted how I think about our collective future.
My conversation with Sewon Min, a PhD student at the University of Washington whose thoughtful approach to language modeling pushed me to think more deeply about a number of phenomena.
Bonus (yes, I’m cheating): a review of the year’s AI progress, with Nathan Benaich, whose State of AI Report continues to provide an important service to the community.
Now, as it was five years ago, the shape of our technological future is one of the most important stories of our day. That collective future is something we should all feel some responsibility towards, and it will impact each of us in different ways. Telling its story well requires thoughtfulness, balance, and a variety of perspectives. Join us in democratizing and demystifying that future.
Write with us; consider joining our team; let us know what you think we should be covering. You can reach us at editor@thegradient.pub, and we (or I, at least) read all of your comments.
— Daniel and the editorial team