Imitation of power: How a fake president could authorize a nuclear strike

Nikolai Legkodimov, technology expert and columnist for Kursiv
The statue of the captain from Köpenick in front of the town hall / Photo: Shutterstock

People are inclined to trust people in uniform. In the age of artificial intelligence and fake images, that trust can become dangerous. Tech expert Nikolai Legkodimov shares his thoughts on these risks in his op-ed.

The Köpenick town hall stands near the home stadium of the German football club “Union”, famous among other things for the fact that, in the years of divided Germany, the entire team fled from the GDR to the FRG. In front of the town hall entrance is a small statue of a mustachioed man in the uniform of a Prussian army officer who ingeniously robbed the city treasury in the early 20th century.

A shoemaker by trade and a professional criminal by vocation, Wilhelm Voigt was born in what was then East Prussia and is now Russia’s Kaliningrad region. His criminal career was varied and not always successful, but he became famous only when he was nearly sixty. In the fall of 1906, Voigt disguised himself as an army captain and, using what we would today call social engineering, convinced the soldiers he met along the way to arrest the city authorities of Köpenick for bribery. The soldiers did not find the accusation implausible, carried out the “commander’s” order and allowed him to “requisition” the city treasury. Neither the soldiers, nor the officials, nor the passers-by doubted the authority and legality of the “captain” who gave orders throughout the procedure. After the circumstances of the case came to light, the German public hotly debated how uncritically citizens obey the orders of anyone in uniform.

A weak link in the chain

As neuroscientists tell us, humans are inherently hierarchical beings: our behavior toward those “senior” to us in rank, status, authority or physical strength is encoded not so much by culture and upbringing as directly at the level of the nervous system and the mechanisms that govern it. It is therefore not surprising that at the beginning of the month a Hong Kong bank employee allowed his bank’s “treasury” to be “requisitioned” by a new, AI-generated “captain from Köpenick”, or rather by several “captains”. Using AI tools that are widely available today, the attackers staged a deepfake video call with a number of high-status figures for the unfortunate banker and convinced him to transfer several tens of millions of dollars to the account they wanted.

In the world of information security, the statement that “the weakest link in any security system is the human being” has long been a commonplace. As always, technology only makes the weakest link easier to exploit. Even Kevin Mitnick, the founding figure of modern “hacking” (at least in the popular literature), though a talented software engineer, always focused his “work” on social engineering, that is, on exploiting the trust of the people who operate the systems.

But the new AI-assisted ways of “gaining trust” have a disturbing effect of scale, especially in matters of politics and government. When several fake bank “big shots” instruct a manager to make a payment, it looks like a clever scam; the possibility of a fake “president” “addressing” the nation is dangerous on an entirely different level. US President Reagan’s joke (accidental or deliberate) about the start of the bombing of the USSR, which went out over the radio in 1984, at the height of the Cold War, alarmed people in both countries.

New reality

In 2024, automated calling services, a kind of “new radio”, told residents of New Hampshire, in the voice of President Biden, that they should not vote in the presidential primaries. Technologies for impersonating political leaders will only expand in scope: former Prime Minister of Pakistan Imran Khan, who is now in prison, is already addressing his supporters with an AI-generated voice.

The problem of unambiguously identifying the other party, both in everyday life (phone and video calls, messengers of various kinds) and in business (banking apps and the like), will be solved fairly quickly. Today’s obvious weaknesses, such as the vulnerability of customer facial identification with an identity document shown in the frame, will also be relatively easy to fix. Most likely, the emphasis will shift to “robust” user biometrics that generative AI cannot yet falsify, such as fingerprints or the iris pattern, since neither the voice nor the visual image of the other party can be trusted anymore.

Gradually, the new reality will work its way into our everyday habits. Refusal to communicate with any insufficiently identified caller will most likely be built into our means of communication, just as protection against spam and unwanted calls is built into our email and smartphones today, and calls from unfamiliar numbers will finally become a thing of the past in favor of calls only from “verified” subscribers, like verified authors on Twitter.

Far more complicated will be the verification of messages from politicians in today’s highly multi-channel world, where any public figure communicates with voters through Twitter (and some, like Trump, even through their own platform). Introducing a kind of KYC check for every public message by a politician or well-known opinion leader is impractical (and most likely impossible). At the same time, the informational impact of an AI-generated video “address” by a head of state announcing a nuclear strike on an adversary would cross out all efforts by officials to refute it, to say nothing of the fact that those “officials” could themselves be AI-generated “captains from Köpenick”.
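To make the idea of per-message verification concrete: in essence, such a “KYC” would amount to cryptographically signing every official statement with the politician’s private key, so that anyone could check it against a published public key. Below is a minimal, purely illustrative sketch of that scheme in Python using Ed25519 signatures from the PyNaCl library; the statement text and the setup are hypothetical assumptions for illustration, not a description of any deployed system.

```python
# Illustrative sketch: per-message signing as a hypothetical "KYC" for public statements.
# Requires the PyNaCl library (pip install pynacl).
from nacl.signing import SigningKey
from nacl.exceptions import BadSignatureError

# One-time setup: the public figure's office generates a key pair and
# publishes verify_key (for example, on an official website or registry).
signing_key = SigningKey.generate()
verify_key = signing_key.verify_key

# Every outgoing statement is signed before publication.
statement = b"Official address: no strike has been ordered."  # hypothetical message
signed = signing_key.sign(statement)

# Any recipient or platform can check authenticity automatically.
try:
    verify_key.verify(signed)
    print("Signature valid: the message really comes from the key holder.")
except BadSignatureError:
    print("Signature invalid: treat this message as a potential fake.")
```

The cryptography itself is trivial; the column’s point stands because the hard part is everything around it: distributing and trusting the public keys, and getting every channel, from Twitter to robocalls, to perform the check.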
