Announcing the Gradient Prize Winners!
Highlighting the best Gradient articles of the last several months that you should definitely check out
Earlier this year, we announced the inaugural Gradient Prize to reward our most outstanding contributions from the past several months. Gradient editors picked five finalists, which we sent to our three guest judges: Chip Huyen, Shreya Shankar, and Sebastian Ruder. Without further ado, here are the prize results!
Winner
The Imperative for Sustainable AI Systems by Abhishek Gupta of the Montreal AI Ethics Institute
Chip on why she chose this as the winner:
The article discusses a type of impact that ML models have on society which doesn’t get talked about nearly as much as others. I love how the author was able to break down three major challenges of the current paradigm, from the more obvious, like exploitative data practices, to the less obvious but equally worrisome, like the centralization of power. The author proposes actionable items to counter these challenges; some might go against current trends, but they can provide encouraging signals for researchers who pursue less traveled paths.
Runners-up
Machine Translation Shifts Power by Amandalynne Paullada
Shreya on this article:
For one, I enjoyed learning about the history of translation’s impact on power. In the early days, those who invested the most in machine translation reaped most of the benefits. It was interesting to read about how machine translation can be used as a barrier between groups of people, rather than as a way to break down language barriers. The piece was strongly opinionated, cited many works, and proposed new ways to think about power dynamics and the future of machine translation.
And Sebastian on it:
This powerful, eloquent essay provides a nuanced account of the benefits and risks of machine translation (MT) technology, embedded in historical context. It touches on important issues that go beyond those discussed in the current literature, including MT deployment in high-stakes scenarios such as police–civilian encounters and what it means to ‘own’ a language. The article aptly illustrates that issues around the use of such technology are often not black and white; for instance, while machine translation of scientific literature can help disseminate knowledge, it necessitates an increased awareness of the limitations of automatically translated scientific text. It also reminds us acutely of the Western-centricity of language technology development, which typically differentiates between ‘high-resource’ and ‘low-resource’ languages, despite the wealth of cultural resources available in the latter. Overall, the article is a powerful call to consider the societal implications not only of machine translation but of language technology in general, and it urges us to be aware not only of the shortcomings of individual methods but also of where the paradigm as a whole may fall short.
Machine Learning Won't Solve Natural Language Understanding by Walid Saba
From the article:
This misguided trend has resulted, in our opinion, in an unfortunate state of affairs: an insistence on building NLP systems using ‘large language models’ (LLMs) that require massive computing power in a futile attempt to approximate the infinite object we call natural language by memorizing massive amounts of data. In our opinion, this pseudo-scientific method is not only a waste of time and resources but is also corrupting a generation of young scientists by luring them into thinking that language is just data – a path that will only lead to disappointment and, worse yet, to hampering any real progress in natural language understanding (NLU). Instead, we argue that it is time to rethink our approach to NLU, since we are convinced that the ‘big data’ approach is not only psychologically, cognitively, and even computationally implausible but, as we will show here, also theoretically and technically flawed.
Finalists
It’s All Training Data by Yim Register
As machine learning scientists, we know that training data can make or break your model: where you got the data from, how biased it is, when it was sampled, how it was categorized. We think about each of these questions when trying to build a generalizable model of the world. So why don’t we apply the same scrutiny to our personal histories? So often we give disproportionate weight to the voice in our head that says we aren’t good enough. We do this unconsciously, without ever investigating where the data came from or bothering to update the database.
Justicia Ex Machina: The Case for Automating Morals by Rasmus Berg Palm and Pola Schwöbel
In the following post, we will discuss two complex and related issues regarding these models: fairness and transparency. Are they fair? Are they biased? Do we understand why they make the decisions they make? These are crucial questions to ask when machine learning models play important parts in our lives and our societies. We will contrast these models with the human decision-makers they are assisting or outright replacing.
Conclusion
Want to help us give out more monetary rewards to our authors? Consider becoming a supporter on Substack! Here’s a special offer just for this occasion:
As always, we’re thrilled with the quality of contributions to The Gradient; if you haven’t had a chance to catch up with these recent pieces, we highly recommend you take a look!
And if you can’t afford to help us monetarily, consider sharing The Gradient with friends!