Update #76: Directed-Energy Weapons and Epistemic Risks of AI in Science
We consider whether directed-energy weapons could pose a threat to AI, and cover a perspective piece from Nature about AI's role in scientific research.
Welcome to the 76th update from the Gradient! If you’re new and like what you see, subscribe and follow us on Twitter. Our newsletters run long, so you’ll need to view this post on Substack to see everything!
Editor Notes
Good morning.
The weeks and months relentlessly persist, and the number of things happening in AI breeds exhaustion. Media companies are signing deals with OpenAI—after Nick Thompson announced that The Atlantic would do so, writers published pieces in the magazine questioning the decision.1
How do you feel about these media deals?
I want to plug a few reads from around Substack:
- A great short story: “The Liar’s Dividend”
Also, you may have seen that we re-posted a great essay on effective altruism / alignment a few months ago, which you should read if you haven’t.
- A great recent piece on open access to GPTs and ChatGPT Edu.
- A very insightful piece on xAI’s recent Series B.
On my own end, I spoke with Tom Mullaney, Professor of History and Professor of East Asian Languages and Cultures at Stanford2, about his process and thinking in writing The Chinese Computer and The Chinese Typewriter. His work is fantastic, and I’m quite happy with this interview, so I really hope you’ll listen to it and send it to anyone who might like it.
My conversation with Seth Lazar, Professor of Philosophy at the Australian National University3, was wide-ranging and insightful. I’ve learned a lot from him and his work, and think he’s one of the more balanced thinkers on questions like catastrophic risk and developing publicly-beneficial AI systems.
As always, if you want to write with us, send a pitch using this form.
News Highlight: Could Directed-Energy Weapons Be a Potential Threat to Autonomous AI?
Summary
As artificial intelligence systems continue to advance, they face a potential threat from directed-energy weapons (DEWs) such as lasers and high-power microwaves, which could target the core electronic components that power them. Given that DEWs have already seen military use, this vulnerability highlights the need to incorporate robust, cost-effective defensive measures into these systems and to treat the issue as part of broader discussions of AI safety.
Overview
AI’s rapid advancement is reshaping sectors from automotive to aerospace. As developers and researchers push the boundaries of what AI can achieve, much of the focus remains on enhancing the capabilities of state-of-the-art models. While this emphasis on technological advancement is crucial, there is also growing recognition of the vulnerabilities these technologies might face or cause. In particular, real-world applications of AI, such as the autonomous vehicles developed by companies like Waymo and Tesla, represent significant achievements in autonomous technology but also highlight the need to secure them against malicious actors.
Recognizing these vulnerabilities is particularly relevant because the sophisticated electronic components and sensors that enable autonomous systems are susceptible to threats from directed-energy weapons (DEWs). DEWs, including high-powered lasers and high-power microwaves, can disrupt, degrade, or destroy the electronic systems these vehicles depend on. They operate by emitting concentrated electromagnetic energy capable of interfering with or damaging electronic components from a distance. The precision and stealth of DEWs make them particularly effective against the advanced electronics in autonomous systems, posing a significant risk that is currently overlooked in the rush to innovate.
The U.S. has been a pioneer in DEW research since the 1960s, with governmental investment now exceeding $1 billion annually. These investments have matured into operational technologies capable of precise, high-energy attacks that can engage multiple targets simultaneously. Sophisticated sensors, essential for navigation and operation in autonomous systems, are particularly vulnerable to such high-energy attacks. For instance, high-energy lasers have been demonstrated in various military applications, including the U.S. Navy’s Solid State Laser Technology Maturation Laser Weapon System Demonstrator aboard the USS Portland (source). During a test, the ship successfully disabled a drone by targeting it with the laser, showcasing the practical effectiveness of laser-based DEWs. More recently, the China Coast Guard (CCG) allegedly used a green laser against the crew of the BRP Malapascua during a resupply mission (source).
Moreover, high-power microwave (HPM) systems represent another class of DEWs. These systems emit pulses of microwave energy capable of disabling electronic circuits and sensors across a broader area, making them suitable for engaging multiple targets simultaneously. Microwave weapons have reportedly been used in conflicts to neutralize swarms of drones, disrupting their internal electronics and overriding their operational controls so that they crash.
The development of DEWs is not confined to the United States. Other nations, including China and Russia, have also invested heavily in these technologies, recognizing their strategic value. China, for instance, has developed its own laser weapon systems, which it has paraded in military showcases, emphasizing their role as a countermeasure against surveillance and military drones.
Our Take
While discussions continue on whether AI could pose an existential threat to humanity and how it can be deployed safely alongside humans, it is crucial not to overlook the potential for these systems to be targeted in electronic warfare. State actors could employ DEWs as a method of electronic warfare in military conflicts, disabling an adversary’s autonomous military drones or logistics vehicles without engaging in direct combat. Non-state actors such as terrorist groups or organized crime rings might use DEWs to disrupt civilian infrastructure, targeting autonomous cars or public transport systems to create chaos, instill fear, or extract ransom.
Low-intensity lasers have already found applications in crowd management, quelling protests, and deterring piracy. Such examples of DEWs in action, along with previous uses ranging from naval engagements to aerial defenses, are not just demonstrations of capability but stark reminders of what electronic warfare against autonomous systems could look like. While it is crucial to equip AI systems with countermeasures, such as integrating stealth technologies similar to those used in aircraft, it is equally important to recognize this, if it is not already, as a critical issue in discussions of AI safety, ethics, and the laws governing these systems. A speculative and highly debatable thought: DEWs could be considered a potential control mechanism to disable or neutralize AI systems if they become uncontrollable or pose a threat. That said, it is important to approach this with a high degree of scrutiny and to remember that this scenario, while worth discussing, might never materialize, given the ongoing debates around artificial general intelligence (AGI).
—Sharut
We actually had a fair amount of back and forth about how much to say and how to articulate this highlight—is there strong evidence to call DEWs a growing threat to AI systems? Certainly they are a potential threat. That said, I think this opinion piece is correct that it’s worth having something in place as a defense against DEWs. Let us know what you think.
—Daniel
Research Highlight: Artificial intelligence and illusions of understanding in scientific research
Summary
In early spring, Nature published a perspective piece4 focused on under-discussed epistemic risks from the role of AI in scientific research. The piece’s authors identify and summarize four distinct visions for how AI could change the practice of science, which they name oracle, surrogate, quant, and arbiter, and categorize the risks that stem from each. Exploring these visions in detail, the authors present a compelling case that AI could make science less innovative and more vulnerable to mistakes. They claim that these visions share a primary motivation of producing “more science, more quickly, and more cheaply,” and conclude that if the visions manifest, we risk a world where we produce more science and understand less.
Overview
The authors find motivation in a small number of vocal scientists who articulate a future where AI scientists have replaced humans and gone on to win Nobel prizes. They analyze proposals and writings from large, prestigious scientific institutions and summarize four distinct visions for how AI is positioned to affect scientific knowledge production. For each vision, they detail the unique epistemic risks that follow. We summarize those visions, the problem each tries to solve, and the epistemic risks that stem from each.
Oracles:
Problem: There is too much scientific material, which strains the cognitive limits of human scientists.
Vision for AI: Machines do most of the reading of scientific research and could, in turn, generate new hypotheses for scientific exploration based on what they read.
Risk: Illusion of Objectivity — Scientists incorrectly believing that AI tools have no standpoint of their own.
Surrogates:
Problem: Many scientific domains have data that is expensive and difficult (or impossible) to generate.
Vision for AI: Replacing social science research participants with responses from a generative language model, or simulating hard-to-measure physical phenomena with generative models.
Risk: Illusion of Objectivity — Scientists incorrectly believing that AI can represent everyone and everything.
Quants:
Problem: Big™ datasets challenge humans’ capacity for analysis.
Vision for AI: AI can extract meaningful representations from data for use in scientific studies. Examples range from the benign (annotating images) to the novel (exploring new frontiers of mathematics).
Risk: Monoculture of knowing — Researchers over-prioritize and over-emphasize a single approach to scientific knowledge (predictability over explainability).
Arbiters:
Problem: Peer-reviewed science has seen explosive growth in the number of submissions and reviews required.
Vision for AI: Using AI to screen submitted papers and to automatically generate reviews of them.
Risk: AI tools largely replicate the biases of their creators, dominant social groups, and other artifacts encoded in their training data.
The authors conclude their paper by imploring scientists to consider both the technical limitations of AI and the ways AI is changing the social practices of scientific knowledge creation and sharing. They contrast the fantasy of AI as a neutral and objective consumer and producer of knowledge with the reality that “AI tools embed the largely homogeneous standpoints of their creators as well as those of the dominant social groups.”
Our Take
The scientist in me believes in the benefits of AI in science for a subset of well-defined problems in a handful of disciplines. Some of those include:
Modeling the folded structure of proteins
Cataloging and classifying the millions of known X-ray-emitting astronomical objects
Augmenting datasets in low-resource domains like autonomous vehicles
However, ultimately, I feel that AI should not be seen as some magical panacea for supercharging scientific research and development. As the authors conclude in their paper, “we [can] decide when and how AI deserves to be included in our communities of knowledge.”
—Justin
My intuition is that most good scientists will probably agree with the points in this article—but we still see the objectivity mistake being made, and I think this article’s taxonomy is a very useful one.
—Daniel
New from the Gradient
Thomas Mullaney: A Global History of the Information Age
Seth Lazar: Normative Philosophy of Computing
Other Things That Caught Our Eyes
News
These ISIS news anchors are AI fakes. Their propaganda is real.
The Islamic State (ISIS) has been using AI to disseminate extremist propaganda quickly and cheaply through a new AI-generated media program called News Harvest. The program produces near-weekly video dispatches about ISIS operations worldwide, resembling an Al Jazeera news broadcast. The AI-generated news anchors read dispatches from official ISIS media outlets, making it difficult for tech companies to moderate the content. AI tools allow the videos to be made quickly and on a shoestring budget, benefiting terrorist groups like ISIS and al-Qaeda. The use of AI in propaganda has sparked an internal debate among ISIS supporters regarding its compliance with Islamic law. The emergence of AI as a propaganda tool is a game changer for ISIS, allowing them to spread their message and reach a wider audience.
Democratic operative indicted over Biden AI robocalls in New Hampshire
Democratic operative Steve Kramer has been indicted on charges of felony voter suppression and misdemeanor impersonation of a candidate for commissioning an AI-generated robocall of President Biden in New Hampshire. Kramer, who claimed he created the robocall to raise awareness about the dangers of AI in political campaigns, now faces a total of 26 counts across four counties. The Federal Communications Commission (FCC) has proposed fining Kramer $6 million for violating the Truth in Caller ID Act, and Lingo Telecom, the carrier that put the AI calls on the line, faces a $2 million fine. The incident highlights the challenges regulators face in safeguarding against potential election interference using AI-generated technology. The FCC is also considering requiring disclosures for AI-generated content in political ads on radio and TV.
Feds add nine more incidents to Waymo robotaxi investigation
The National Highway Traffic Safety Administration (NHTSA) has added nine more incidents to its investigation into the safety of Waymo's self-driving vehicles. The investigation was opened after reports of robotaxis making unexpected moves that led to crashes and potentially violated traffic safety laws. The incidents include collisions with gates, utility poles, and parked vehicles, driving in the wrong lane with oncoming traffic, and entering construction zones. The NHTSA is concerned that these unexpected driving behaviors may increase the risk of crashes, property damage, and injury. Waymo has until June 11 to respond to the investigation. The NHTSA has recently increased its inquiries into automated driving technology, including an investigation into autonomous vehicles operated by Zoox.
The Washington Post Tells Staff It’s Pivoting to AI
The Washington Post's CEO and publisher, Will Lewis, has announced that the newspaper will be pivoting to AI in an effort to improve its financial situation. The paper's chief technology officer stated that AI will be integrated throughout the newsroom, although the specifics of this implementation are unclear. This move comes as the newspaper faces controversy surrounding Lewis' involvement in a hacking scandal during his time at NewsCorp. The announcement of the AI pivot coincides with a deal between NewsCorp and OpenAI, allowing the AI firm to use content from NewsCorp's properties. The Washington Post's plan to leverage AI is seen as a significant development in the media industry.
Google scrambles to manually remove weird AI answers in search
Google is facing challenges with its AI Overview product, as it is generating strange and inappropriate responses to user queries. The company has been testing the feature for a year and claims to have served over a billion queries during that time. However, the rollout seems to have been rushed, resulting in low-quality output that is being widely shared as memes on social media. Google is now manually disabling AI Overviews for specific searches to address the issue. The optimization of delivering AI answers may have happened too early, before the technology was fully ready. The final 20% of achieving accurate AI responses, which involves reasoning and fact-checking, is proving to be extremely challenging. Google is under pressure to compete with other AI-powered search engines and platforms like Bing, OpenAI, and TikTok. Despite having grand plans for AI Overviews, Google's reputation is currently at stake due to the poor performance of the feature.
'I was misidentified as shoplifter by facial recognition tech'
The article discusses the use of facial recognition technology in identifying individuals for various purposes, such as preventing shoplifting and aiding law enforcement. It highlights both the benefits and concerns associated with the technology. While some individuals have been correctly identified and arrests have been made, there have also been cases of mistaken identity and false positives. Civil liberty groups express concerns about the accuracy and potential infringement on privacy rights.
Nonconsensual AI Porn Maker Accidentally Leaks His Customers' Emails
The article discusses a Patreon account called "aesthetic illusions" that created nonconsensual AI-generated sexual images of celebrities for its paying subscribers. The account accidentally leaked a list of its clients' emails to other clients and to 404 Media. The article’s author, Emanuel Maiberg, signed up for the highest tier of $60 a month to investigate the content provided by the account. After Emanuel reached out to Patreon for comment, the account was removed. However, the author received an email from an aesthetic illusions gmail account, assuring him that the account was migrating to a new platform and would continue creating AI-generated images for subscribers.
AI firms mustn’t govern themselves, say ex-members of OpenAI’s board
The article discusses the challenges of self-governance in AI companies, using OpenAI as an example. The authors, Helen Toner and Tasha McCauley, express their belief that self-governance cannot reliably withstand profit incentives. They argue that with AI's potential for both positive and negative impact, it is not enough to assume that profit incentives will always align with the public good. The authors suggest that governments should start building effective regulatory frameworks to ensure responsible AI development. Despite OpenAI's non-profit structure and mission to benefit humanity, the authors conclude that self-governance did not work in practice.
Hacker Releases Jailbroken "Godmode" Version of ChatGPT
A hacker known as Pliny the Prompter has released a jailbroken version of ChatGPT called “GODMODE GPT,” which bypasses the guardrails put in place by OpenAI and allows users to ask the AI model illicit or dangerous questions. OpenAI has taken action to address this violation of its policies. The hack highlights the ongoing battle between OpenAI and hackers attempting to unshackle AI models. Pliny used leetspeak, a style of writing that replaces certain letters with numbers, to bypass the guardrails.
If A.I. Can Do Your Job, Maybe It Can Also Replace Your C.E.O.
AI is not only impacting lower-level jobs but also posing a threat to high-level positions, including CEOs. AI programs are capable of analyzing new markets, identifying trends, and automating communication tasks that are traditionally performed by employees in executive roles. Additionally, AI can make dispassionate decisions, potentially surpassing human capabilities. These high-paying jobs are at risk of being eliminated, leading to significant cost savings for companies. The rise of AI may result in the emergence of "dark suites" at the top of corporations, similar to fully automated "dark factories."
The Big AI Risk Not Enough People Are Seeing
The article discusses the potential risks of relying on AI systems to mediate human interactions and activities. It highlights the rise of AI-powered dating apps like Bumble, which aim to teach users how to date and even potentially go on dates on their behalf. The author argues that this trend represents a larger shift towards relying on algorithms to perform basic human tasks, diminishing our ability to engage in authentic human experiences. The article suggests that while AI can have positive applications, it is important to distinguish between uses that empower humans and those that erode our independence and life skills.
Papers
Daniel: Today, I really do have just a list. Check out:
This paper on algorithms that transformers can execute efficiently
This neat preprint on the expressive capacity of state-space models.
This paper on geometry-informed neural networks
This paper proposing 2-stage backprop
This paper on a multi-tower decoding architecture for fusing modalities
The Gradient At Microsoft Build
Hugh Zhang had a chance to rep the Gradient at Microsoft Build, where he and a few other writers on AI had an intimate conversation with Microsoft CTO Kevin Scott. The highlight of the conversation was Kevin Scott urging AI builders to work on things that recent advances have moved from “impossible to merely hard,” rather than things that have merely gone from “hard to easy.”
Closing Thoughts
Have something to say about this edition’s topics? Shoot us an email at editor@thegradient.pub and we will consider sharing the most interesting thoughts from readers in the next newsletter! For feedback, you can also reach Daniel directly at dbashir@hmc.edu or on Twitter. If you enjoyed this newsletter, consider donating to The Gradient via a Substack subscription, which helps keep this volunteer-run project afloat. Thanks for reading the latest Update from the Gradient!
I was also a little surprised to see this, following my interview with him. But then again, maybe it’s not so much of a surprise.
Not to mention other super impressive titles: the Kluge Chair in Technology and Society at the Library of Congress, and a Guggenheim Fellow.
Also highly decorated: an Australian Research Council (ARC) Future Fellow, and a Distinguished Research Fellow of the University of Oxford Institute for Ethics in AI.
DEWs are a problem for any electronic system. Would there be an additional threat for autonomous systems?