
AI and the risks related to forgery, from documents to identities


Artificial Intelligence (AI) use is increasing worldwide, and not only for “good” purposes. Oren Etzioni published an article in the Harvard Business Review on the risks AI poses to democracy, security, and society, especially through forgery

Artificial Intelligence (AI) use is increasing worldwide, not only in cyber security but in many aspects of everyday life. Some AI applications are beneficial, improving experts’ work and industrial processes; others heighten the dangers to our security. Oren Etzioni published an interesting article in the Harvard Business Review on the risks posed by the use of AI to forge documents, pictures, audio recordings, videos, and online identities. The technology makes forgery high-fidelity, inexpensive, and automated, with potentially disastrous consequences for democracy, security, and society. A concrete example came in February, when OpenAI announced GPT-2, an AI text generator whose output seemed so authentic that the company deemed it too dangerous to release publicly for fear of misuse. Unfortunately, even more powerful tools are sure to follow and to be deployed by rogue actors.

The AI expert: Automated forgery is already prevalent on social media. In the international digital world, legislation has limited practical impact, and the problem extends far beyond bots

According to Etzioni, “automated forgery is already prevalent on social media, as we witnessed during the 2016 U.S. elections. Twitter has uncovered tens of thousands of automated accounts linked to Russia in the months preceding the 2016 election, according to The Washington Post. Facebook estimated that fake news spread by Russian-backed bots from January 2015 to August 2017 reached potentially half of the 250 million Americans who are eligible to vote. I have called for regulations requiring bots to disclose they are not human, and the state of California introduced a corresponding law that will take effect in July, 2019. This is a valuable step, but in the international digital world legislation has limited practical impact. The problem”, reports the Harvard Business Review, “extends far beyond bots. Doctored images are commonplace, and recent advances in image processing have enabled the creation of realistic fake video”.

The “deepfakes issue”

“Then came ‘deepfakes,’ AI-generated videos of entirely new facial expressions of a target person created by stitching together two faces in an eerily convincing way. This face-swapping technology is sufficiently available, and it has started appearing in pornography, with several high-profile celebrities’ faces added to pornographic videos. A viral video of Obama issuing a warning about deepfakes was, itself, a fake. When attempting to decide whether an item is genuine, it’s natural to consider its source. Yet, it turns out that a website, an e-mail address, and even the origin of a phone call can be easily faked or ‘spoofed’. I found this out the hard way”, explained the cyber security expert, “when my phone rang, and I looked at the caller id, only to find that, seemingly, I was calling myself!”

Artificial Intelligence will change phishing cyber attacks

“The adage ‘on the Internet, nobody knows you’re a dog’ implies that you cannot be certain of the author or origin of most items you receive via email, through social media, or even by phone,” Etzioni added. “This Internet blindness is the basis for ‘phishing’ — cyber-attacks where a communication purporting to be from a trusted source induces you to reveal private information such as a password or credit card number. Today, the text of automatically-generated phishing e-mails is easy to spot as phony, but AI is about to change that. Historically, society has relied on signatures to ensure authenticity. The Sumerians used signatures over 5000 years ago with intricate seals stamped in clay tablets to endorse their writings. Marks, stamps, and seals evolved into handwritten text as literacy became widespread, and references to signing documents appear throughout history”.

Digital signatures and AI 

“On the Internet, we rely on digital signatures. A digital signature is a computer method (based on cryptography) of ensuring that an item wasn’t tampered with after it was signed,” the cyber security expert underlined. “Services like DocuSign certify contracts using digital signatures. Automated messages between websites can also be authenticated by digital signatures, but digital signatures are not widely used to certify the authorship of e-mails, social media posts, images, videos, etc. The specter of AI forgery means that we need to act to make digital signatures de rigueur as a means of authentication of digital content”.
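To illustrate the sign-then-verify idea behind such schemes, here is a minimal sketch in Python using textbook RSA with deliberately tiny, insecure parameters. Every number and name below is an illustrative assumption, not the actual method used by DocuSign or any real service; production systems use keys of 2048 bits or more and padded schemes such as RSA-PSS.

```python
import hashlib

# Toy RSA parameters (p = 61, q = 53) -- illustrative only, trivially breakable.
N = 61 * 53          # public modulus (3233)
E = 17               # public exponent
D = 2753             # private exponent: E * D ≡ 1 (mod lcm(60, 52))

def digest(message: bytes) -> int:
    """Hash the message with SHA-256 and reduce it into the modulus."""
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % N

def sign(message: bytes) -> int:
    """Signer uses the PRIVATE exponent: sig = digest^D mod N."""
    return pow(digest(message), D, N)

def verify(message: bytes, signature: int) -> bool:
    """Anyone can check with the PUBLIC exponent: sig^E mod N == digest."""
    return pow(signature, E, N) == digest(message)

msg = b"This e-mail really was written by its claimed author."
sig = sign(msg)
print(verify(msg, sig))            # True: signature matches the content
print(verify(msg, (sig + 1) % N))  # False: an altered signature is rejected
```

Because only the holder of the private exponent can produce a signature that checks out against the public one, a valid signature ties the content to its author, which is exactly the property Etzioni argues should extend to everyday digital content.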

Etzioni’s solutions are based on digital signatures: these have to be certified and verified. Any item that isn’t signed has to be considered potentially forged

For Etzioni the AI forgery problem is a serious danger, but there are solutions to counter it. “First, we need to certify signatures, which can be done by central authorities, or via more democratic computer methods such as encryption and blockchain,” he writes in the Harvard Business Review. “Second, we need to make the acts of signing and verifying signatures as seamless as possible. Signing should be enabled by default in our email software, word processor, smartphone cameras, and in any production of digital content. Our browsers, social-media applications, and other media-reading software should highlight whether content is signed, and by whom. Finally, and perhaps most challenging, we need to promulgate the norm that any item that isn’t signed is potentially forged. We don’t accept checks that aren’t signed—the same should hold for digital content.”

The AI expert: We need to jumpstart ‘zero click’ digitally-signed emails, social-media posts, documents, images, videos, and even phone calls before it’s too late

“We want to preserve the option of anonymity so that digital signatures aren’t used to suppress dissent or discourage whistle blowers. Moreover, we want to allow for pseudonyms so that an author can choose to hide their identity but still be recognized as a particular individual or organization,” Etzioni concluded. “Digital signatures will not prevent a bot from masquerading as some person, but will stop the bot from impersonating you, and from disseminating content that you didn’t author in your name. The computer methods to support reliable digital signatures exist, but are not seamless enough for ubiquitous use. We need to jumpstart ‘zero click’ digitally-signed emails, social-media posts, documents, images, videos, and even phone calls before it’s too late.”

Oren Etzioni is the Chief Executive Officer of the Allen Institute for Artificial Intelligence. He is also a Professor at the University of Washington’s Computer Science department and a Venture Partner at the Madrona Venture Group.
