Artificial intelligence and machine learning are being embedded in more and more of the digital and physical products and services we use every day. Consequently, bad actors—cybercriminals, terrorists, and authoritarian governments—will increasingly seek to make malicious use of A.I., warns a new report just issued by a team of researchers led by Miles Brundage, a research fellow at the Future of Humanity Institute at Oxford University, and Shahar Avin, a research associate at the Centre for the Study of Existential Risk at Cambridge University.
The new report "surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats." The researchers specifically look at how A.I. might be misused over the next five years to cause harm in the digital, physical, and political realms. They also suggest some countermeasures to these potential threats.
The researchers begin by observing that artificial intelligence (A.I.) and machine learning (M.L.) are inherently dual-use technologies; that is, they can be used to achieve both beneficial and harmful ends. One retort is that this is a fairly trite observation, since there are damned few, if any, technologies that are not dual-use, ranging from sharp sticks and fire to CRISPR genome editing and airplanes.
That being said, the researchers warn that A.I. and M.L. can exacerbate security vulnerabilities because A.I. systems are commonly both efficient and scalable, that is, capable of being easily expanded or upgraded on demand. They can also exceed human capabilities in specific domains, and once developed they can be rapidly diffused so that nearly anyone can have access to them. In addition, A.I. systems can increase anonymity and psychological distance.
The authors lay out a number of scenarios in which A.I. is used maliciously. For example, A.I. could automate social engineering attacks, tailoring phishing messages to specific targets in order to gain access to proprietary systems or information. They suggest that it will not be too long before "convincing chatbots may elicit human trust by engaging people in longer dialogues, and perhaps eventually masquerade visually as another person in a video chat."
In the physical realm, they outline a scenario in which a cleaning robot, booby-trapped with a bomb, goes about its autonomous duties until it identifies the minister of finance, whom it then approaches and assassinates by detonating itself. Assassins might also repurpose drones to track and attack specific people. Then there is the problem of adversarial examples, in which objects like road signs are subtly perturbed in ways that fool A.I. image classifiers, causing, say, a self-driving vehicle to misidentify a stop sign as a roadside advertisement.
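To make the adversarial-examples idea concrete, here is a minimal sketch of the widely studied fast gradient sign method, which nudges each pixel of an image in the direction that most increases a classifier's error. The pretrained network, random input tensor, and class index below are illustrative stand-ins for this sketch, not anything drawn from the report.

import torch
import torch.nn.functional as F
import torchvision.models as models

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    # Compute the loss gradient with respect to the input pixels, then step
    # each pixel by +/- epsilon in the direction that increases the loss.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Illustrative usage with a pretrained classifier and a random stand-in "image".
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
image = torch.rand(1, 3, 224, 224)     # placeholder for a photo of a road sign
label = torch.tensor([919])            # an ImageNet class index, used as a stand-in
adv_image = fgsm_perturb(model, image, label)
print(model(adv_image).argmax(dim=1))  # the prediction may flip to another class

The perturbation is bounded by epsilon, so the altered image looks essentially unchanged to a human observer even when the classifier's output flips.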
A.I. could be used by governments to suppress political dissent. China's developing dystopian social credit system relies upon A.I. combined with ubiquitous physical and digital surveillance to minutely control what benefits and punishments will be meted out to its citizens. On the other hand, disinformation campaigners could use A.I. to create and target fake news in order to disrupt political campaigns. A.I. techniques will enable the creation of believable videos in which nearly anyone can be portrayed as saying or doing almost anything.
What can be done to counter these and other threats posed by the malicious use of A.I.? Since artificial intelligence is dual-use, A.I. techniques can also be used to detect attacks and defend against them. A.I. is already being deployed for purposes such as anomaly and malware detection. With regard to disinformation, the researchers point to efforts like the Fake News Challenge, which uses machine learning and natural language processing to combat fake news.
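As a toy illustration of that defensive side, the sketch below uses an off-the-shelf anomaly detector to flag unusual network sessions, the kind of task the report groups under anomaly and malware detection. The "traffic" features are synthetic assumptions for the example, and scikit-learn's IsolationForest is just one of many algorithms that could fill this role.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Normal traffic: modest request rates and payload sizes (synthetic data).
normal = rng.normal(loc=[100, 500], scale=[10, 50], size=(1000, 2))
# A handful of suspicious sessions: very high request rates, tiny payloads.
suspicious = rng.normal(loc=[900, 20], scale=[30, 5], size=(5, 2))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal)

# Score new sessions: -1 flags an anomaly, 1 means "looks normal".
print(detector.predict(np.vstack([normal[:3], suspicious])))

In practice such a detector would be trained on far richer features, but the principle is the same: the system learns what ordinary activity looks like and raises an alert when new activity deviates from it.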
The researchers also recommend red teaming to discover and fix potential security vulnerabilities and safety issues; setting up a system in which identified vulnerabilities are disseminated to A.I. researchers, producers, and users with an eye to "patching" them; offering bounties for identifying vulnerabilities; and creating a framework for sharing information about attacks among A.I. companies, analogous to the Information Sharing and Analysis Centers in the cyber domain. The report concludes: "While many uncertainties remain, it is clear that A.I. will figure prominently in the security landscape of the future, that opportunities for malicious use abound, and that more can and should be done."
A just-released report by the cybersecurity firm McAfee and the Center for Strategic and International Studies estimates that cybercrime cost the global economy $600 billion last year. Calculating how much infotech has boosted the world's wealth is difficult, but one recent estimate suggests that digital technologies increased global GDP by $6 trillion. Another report predicts that A.I. will contribute as much as $15.7 trillion to the world economy by 2030. Clearly, with so much wealth at stake, it will be worth it for folks to develop and invest in effective countermeasures.
The New York Times reports the more sanguine take of digital technologist Alex Dalyac:
Some believe concerns over the progress of A.I. are overblown. Alex Dalyac, chief executive and co-founder of a computer vision start-up called Tractable, acknowledged that machine learning will soon produce fake audio and video that humans cannot distinguish from the real thing. But he believes other systems will also get better at identifying misinformation. Ultimately, he said, these systems will win the day.
Considering that humanity has so far wrung far more benefits than harms from earlier dual-use technologies, it's a good bet that the same will prove true of A.I.