Gradient Update #9: Bias Bounties and Hierarchical Architectures for Computer Vision
In which we discuss the first bias bounty competition by Twitter and a new hierarchical multi-layer perceptron architecture for computer vision.
Welcome to the ninth update from the Gradient! If you were referred by a friend, subscribe and follow us on Twitter!
News Highlight: The first Bias Bounty Challenge
This edition’s news story is “Sharing learnings from the first algorithmic bias bounty challenge.”
Summary Twitter’s algorithmic bias bounty challenge, the first of its kind, recently concluded. While users had previously found the algorithm had a racial bias, the bounty uncovered a number of other biases and potential harms. For instance, the winning submission “used a counterfactual approach to demonstrate that the model tends to encode stereotypical beauty standards, such as a preference for slimmer, younger, feminine, and lighter-skinned faces,” while the most innovative submission demonstrated that the algorithm prefers lighter-skinned emojis.
In their post, Twitter engineers describe the difficulty of designing a grading rubric that would allow them to compare submissions while encompassing a variety of harms and leaving room for creativity. The challenge received submissions from participants around the world, from universities and startups to enterprise companies. It confirmed that biases are embedded in Twitter’s saliency model and that such biases are often learned from training data: the model was trained on open-source human eye-tracking data.
Background Many websites, Twitter included, automatically crop photos that users upload. Twitter’s method for cropping uses a saliency algorithm to determine which parts of a photo are most important. As a number of users found in late 2020, there appeared to be a bias in the system: it was cropping out certain users more frequently than others. A bias assessment from Twitter confirmed the issue. In August 2021, Twitter held the first algorithmic bias bounty challenge, inviting the community to probe their algorithm for yet undiscovered biases and harms.
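For intuition, here is a minimal sketch of how a saliency map can drive an automatic crop: score every candidate window by its total saliency and keep the best one. The windowing and scoring logic below are illustrative assumptions, not Twitter’s actual pipeline.

```python
import numpy as np

def saliency_crop(saliency: np.ndarray, crop_h: int, crop_w: int):
    """Return (top, left) of the crop window with the highest total saliency.

    `saliency` is a 2D array of per-pixel importance scores. This is an
    illustrative stand-in for a learned saliency model, not Twitter's code.
    """
    H, W = saliency.shape
    # Integral image so each candidate window can be scored in O(1).
    integral = np.pad(saliency.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))

    best_score, best_yx = -np.inf, (0, 0)
    for y in range(H - crop_h + 1):
        for x in range(W - crop_w + 1):
            score = (integral[y + crop_h, x + crop_w] - integral[y, x + crop_w]
                     - integral[y + crop_h, x] + integral[y, x])
            if score > best_score:
                best_score, best_yx = score, (y, x)
    return best_yx

# Example: choose a 100x100 crop from a random 200x300 "saliency map".
top, left = saliency_crop(np.random.rand(200, 300), 100, 100)
```

The important point is that whatever biases the saliency model has learned are inherited directly by the crop, since the crop simply follows the highest-scoring region.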
Why does it matter? Prizes are a proven incentive for getting talented people to work on problems like fixing bugs and security vulnerabilities. Competitions like this one could channel that same talent toward identifying potential biases in machine learning systems, helping us better understand how things can go wrong when those systems are deployed in ways we might or might not expect.
Editor Comments
Daniel: I’m pretty sure this is the first time we’ve seen a challenge like the one Twitter has run. It’s a big step for a company to expose an algorithm to this kind of scrutiny: we’ve seen calls for organizations like Facebook or YouTube to open up their algorithms for similar purposes, but those seem unlikely to be answered. People like AI2’s CEO Oren Etzioni have written in favor of algorithmic auditing, which would allow an AI system to be queried externally with hypothetical cases. This could help interrogate systems for bias without exposing proprietary information, but I haven’t seen much concrete movement towards putting ideas like this into practice. I’d be very excited to see more bounties like Twitter’s, but I am skeptical that many tech companies will willingly expose their algorithms in a similar way.
Hugh: I think prize competitions in general are a great way to spur the community to work towards fixing important issues. Bounties for security vulnerabilities and other bugs have been around for decades, and it’s nice to see that a similar approach is working for “bugs” in machine learning.
Andrey: I’ve been following this story pretty closely, and have been quite excited by it. The idea of adapting the often-used concept of bug bounties to address issues in AI makes a lot of sense, and the success of this competition demonstrated empirically that it can work. It also showcases the strength of Twitter’s AI ethics team, which handled the cropping controversy very well on their own and then went on to build this challenge around it. I am hopeful other companies take note of this approach to AI ethics and also attempt the bias bug bounty idea.
Paper Highlight: ConvMLP: Hierarchical Convolutional MLPs for Vision
This edition’s paper is “ConvMLP: Hierarchical Convolutional MLPs for Vision.”
Summary University of Oregon researchers “propose ConvMLP: a hierarchical Convolutional MLP for visual recognition... a light-weight, stage-wise, co-design of convolution layers, and MLPs.” The model was designed to be “scalable and seamlessly deployed on downstream tasks like object detection and semantic segmentation,” and it achieves near state-of-the-art (SOTA) classification accuracy on ImageNet. It also shows strong results when transferred to object detection, semantic segmentation, and other downstream tasks.
Background There are a few novelties in the method that go beyond strong benchmark numbers. One is the choice of channel MLPs, a departure from the spatial (token-mixing) MLPs used in other notable SOTA MLP models such as MLP-Mixer. By restricting the MLPs to the channel dimension, weights are shared across spatial locations, which reduces the overall dimensionality and the number of trainable parameters.
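To make the idea concrete, a channel MLP is just a small MLP applied to the channel vector at every spatial position, with the same weights reused everywhere. The sketch below is a minimal PyTorch version with assumed layer names and sizes, not the authors’ exact block.

```python
import torch
import torch.nn as nn

class ChannelMLP(nn.Module):
    """Two-layer MLP applied independently at each spatial location.

    The same weights are reused at every position, so the parameter count
    depends only on the channel dimension, not on the input resolution.
    Layer names and the hidden ratio are illustrative assumptions.
    """
    def __init__(self, dim: int, hidden_ratio: int = 3):
        super().__init__()
        self.fc1 = nn.Linear(dim, dim * hidden_ratio)
        self.act = nn.GELU()
        self.fc2 = nn.Linear(dim * hidden_ratio, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, height, width, channels); nn.Linear acts on the last
        # (channel) axis, so every (h, w) position shares the same weights.
        return self.fc2(self.act(self.fc1(x)))

# Works at any resolution: the block never hard-codes a spatial size.
out = ChannelMLP(dim=64)(torch.randn(2, 56, 56, 64))
```

Because the linear layers only touch the channel axis, the block is agnostic to spatial resolution, which is what lets it be paired with convolutions at different stages of the hierarchy.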
Another novelty lies in how the researchers sidestep the fixed input dimensionality that typically constrains MLP-based models. By introducing a “convolutional tokenizer”, they can extract an initial feature map with a fixed channel dimension from images of any resolution. As a result, the “visual representation learned by ConvMLP can be seamlessly transferred and achieve competitive results with fewer parameters”.
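Below is a hedged sketch of what such a convolutional tokenizer could look like: a small convolutional stem whose output channel width is fixed while the spatial size scales with the input, so the same model can ingest classification, detection, or segmentation inputs. The kernel sizes and channel widths are assumptions for illustration rather than the paper’s exact stem.

```python
import torch
import torch.nn as nn

class ConvTokenizer(nn.Module):
    """Convolutional stem mapping an image of any resolution to a feature map
    with a fixed channel dimension, so downstream channel-MLP stages never see
    a hard-coded input size. Hyperparameters here are illustrative only.
    """
    def __init__(self, in_ch: int = 3, embed_dim: int = 64):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_ch, embed_dim, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(embed_dim),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Output is (B, embed_dim, H/4, W/4) for any input height/width.
        return self.stem(x)

# ImageNet-sized inputs and larger detection/segmentation crops both work:
tok = ConvTokenizer()
print(tok(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 64, 56, 56])
print(tok(torch.randn(1, 3, 512, 768)).shape)  # torch.Size([1, 64, 128, 192])
```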
Editor Comments
Justin: In our community's quest to find an almighty model, many researchers achieve SOTA results through variations of brute force: either more data or more parameters. While brute-forcing our way through 175 billion parameters or 18 million labeled images has led to great outcomes for billion-dollar corporations, it has failed spectacularly along other axes, given the unprecedented amount of cash, private data, and burnt fossil fuels needed to get there. Given the community's need for long-term sustainable model development, I find it extremely refreshing to see such great results coupled with a significant reduction in the number of parameters needed to get there. For those keeping score at home, ConvMLP eked out 4 additional accuracy points over MLP-Mixer while using 17 million (30%) fewer parameters.
Andrey: These kinds of works are exciting to me, as they examine the underlying assumptions of how we build models instead of just introducing tweaks to known models for particular contexts. The Transformer has clearly demonstrated that there may still be powerful building blocks for deep neural networks we have not yet found, and the growing popularity of research exploring that possibility (this work included) is encouraging.
New from the Gradient
The Imperative for Sustainable AI Systems
Sergey Levine on Robot Learning & Offline RL
Jeremy Howard on Kaggle, Enlitic, and fast.ai
Evan Hubinger on Effective Altruism and AI Safety
News
Hobbling Computer Vision Datasets Against Unauthorized Use Researchers from China have developed a method to copyright-protect image datasets used for computer vision training, by effectively ‘watermarking’ the images in the data, and then decrypting the ‘clean’ images via a cloud-based platform for authorized users only.
Tesla is ordered to turn over Autopilot data to a federal safety agency The main federal auto safety agency has ordered Tesla to hand over a trove of data on its Autopilot driver-assistance system as part of an investigation into Tesla cars crashing into fire trucks or other emergency vehicles parked on roads and highways.
Bias persists in face detection systems from Amazon, Microsoft, and Google Commercial face-analyzing systems have been critiqued by scholars and activists alike over the past decade, if not longer. A paper last fall by University of Colorado Boulder researchers showed that facial recognition software from Amazon, Clarifai, Microsoft, and others was 95% accurate for cisgender men but often misidentified trans people.
Deepfakes in cyberattacks aren’t coming. They’re already here. In March, the FBI released a report declaring that malicious actors almost certainly will leverage “synthetic content” for cyber and foreign influence operations in the next 12-18 months. This synthetic content includes deepfakes: audio or video that is either wholly created or altered by artificial intelligence or machine learning to convincingly misrepresent someone as doing or saying something that was not actually done or said.
Papers
Challenges in Detoxifying Language Models Large language models (LM) generate remarkably fluent text and can be efficiently adapted across NLP tasks. Measuring and guaranteeing the quality of generated text in terms of safety is imperative for deploying LMs in the real world; to this end, prior work often relies on the automatic evaluation of LM toxicity. We critically discuss this approach, evaluate several toxicity mitigation strategies with respect to both automatic and human evaluation, and analyze the consequences of toxicity mitigation in terms of model bias and LM quality.
The Sensory Neuron as a Transformer: Permutation-Invariant Neural Networks for Reinforcement Learning In complex systems, we often observe complex global behavior emerge from a collection of agents interacting with each other in their environment, with each individual agent acting only on locally available information, without knowing the full picture. Such systems have inspired the development of artificial intelligence algorithms in areas such as swarm optimization and cellular automata.
A Recipe For Arbitrary Text Style Transfer with Large Language Models In this paper, we leverage large language models (LMs) to perform zero-shot text style transfer. We present a prompting method that we call augmented zero-shot learning, which frames style transfer as a sentence rewriting task and requires only natural language instruction, without model fine-tuning or exemplars in the target style.
Textless NLP: Generating expressive speech from raw audio Text-based language models such as BERT, RoBERTa, and GPT-3 have made huge strides in recent years. When given written words as input, they can generate extremely realistic text on virtually any topic. In addition, they also provide useful pre-trained models that can be fine-tuned for a variety of difficult natural language processing (NLP) applications, including sentiment analysis, translation, information retrieval, inferences, and summarization, using only a few labels or examples (e.g., BART and XLM-R).
Closing Thoughts
Have something to say about this edition’s topics? Shoot us an email at gradientpub@gmail.com and we will consider sharing the most interesting thoughts from readers in the next newsletter! If you enjoyed this piece, consider donating to The Gradient via a Substack subscription, which helps keep this grad-student / volunteer-run project afloat. Thanks for reading the latest Update from the Gradient!