Digital Deception: The Importance of AI Deepfake Regulation
Cara Leonard
October 28, 2024
Seminar in Science Writing
"Digital Deception: The Importance of AI Deepfake Regulation"
Have you ever watched a video that made you question what’s real? Imagine a deep fake of a politician committing illicit acts, or a beloved celebrity caught in a fake scandal. With AI advancing so quickly, these scenarios are becoming possible. Without regulations, we risk a future filled with misinformation, privacy violations, and criminal activity. In fact, some extreme situations could arise if we don’t act to regulate this technology and discuss why it’s crucial to tackle these dangers before they spiral out of control.
The paragraph you just read was fabricated entirely by ChatGPT, a form of generative artificial intelligence (AI). You most likely noticed a few indicators that it was generated, such as the oddly rigid sentence structure or robotic tone, but with a few changes to the prompt, this platform could produce a descriptive, coherent article that sounds like it was written by an actual person. While this may not seem like much of a threat, left unregulated, such tools make a dystopian future very possible: one of widespread privacy violations and unprecedented power for those who control AI.
A major rising threat accompanying the development of AI is the deepfake, a form of synthetic media in which one person's likeness is swapped onto another's. For example, in the film "Rogue One: A Star Wars Story," Industrial Light & Magic (ILM) recreated the likeness of the late Peter Cushing as Grand Moff Tarkin, along with a young Carrie Fisher as Princess Leia [1]. Using photoreal facial animation, ILM relied on "facial performance-capture solving systems" to bring these characters to life [1]. This kind of facial capture has continued to evolve across films, especially CGI-heavy productions, and could keep improving as artists and AI work together to fill in the gaps and create realistic imagery. Deepfakes could also serve historical purposes: AI could translate a famous speech into another language while generating the voice of the original speaker, highlighting how audiences who spoke that language perceived the speaker compared to those who didn't. Though attaching a person's likeness this way is a remarkable feat, deepfakes have other, far less benign uses.
Since the term was coined in 2017 by a Reddit user of the same name, "deepfake" has expanded to cover a variety of "synthetic media applications," including realistic images of people who don't exist [2]. Other examples include videos in which another person's face is edited over the subject's, or a synthetic voice that sounds like a family member or your boss. These tools can be used in both positive and negative scenarios, but the question becomes whether it is possible, or wise, to continue developing something with seemingly limitless possibilities. Alexander C. Karp, CEO of the data software company Palantir Technologies, summarizes it best: "We will again have to choose whether to proceed with the development of a technology whose power and potential we do not yet fully apprehend" [3].
This is the point that must be understood if we are to regulate generative AI and its capacity to deepfake almost anything or anyone. According to Matt Groh, a research assistant with the Affective Computing group at the MIT Media Lab, all someone has to do is swap one person's face for another using a facial recognition algorithm and a "deep learning computer network called a variational auto-encoder [VAE]" [2]. For someone who isn't tech-savvy this may sound daunting, but with enough practice and precision, thousands of fake videos can appear on the internet showing a celebrity or politician committing illicit acts, stating false information, or even declaring war on another nation.
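To make that description concrete, the sketch below shows, in simplified form, the architecture behind classic face-swap tools: a single shared encoder that learns pose and expression, paired with one decoder per identity. This is a minimal, hypothetical illustration in PyTorch; it is untrained, the layer sizes are invented, and the variational sampling step of a true VAE is omitted for brevity.

    # Minimal, hypothetical sketch of a face-swap autoencoder (untrained).
    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        """Compresses a 64x64 face crop into a latent code (pose/expression)."""
        def __init__(self, latent_dim=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
                nn.Flatten(),
                nn.Linear(128 * 8 * 8, latent_dim),
            )
        def forward(self, x):
            return self.net(x)

    class Decoder(nn.Module):
        """Reconstructs a face from the latent code; one decoder per identity."""
        def __init__(self, latent_dim=256):
            super().__init__()
            self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
            self.net = nn.Sequential(
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
            )
        def forward(self, z):
            x = self.fc(z).view(-1, 128, 8, 8)
            return self.net(x)

    encoder = Encoder()      # shared across identities
    decoder_a = Decoder()    # would be trained only on faces of person A
    decoder_b = Decoder()    # would be trained only on faces of person B

    face_of_a = torch.rand(1, 3, 64, 64)     # stand-in for a real face crop
    swapped = decoder_b(encoder(face_of_a))  # A's expression rendered as B's face
    print(swapped.shape)                     # torch.Size([1, 3, 64, 64])

In a real system, the two decoders are trained on thousands of face crops of each person; at generation time, a frame of person A is encoded and decoded with person B's decoder, which is what makes the swap look so seamless.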
For a more clear-cut example, Meredith Somers, a news writer for MIT Sloan, references a deepfake combining Richard Nixon's resignation speech with "the text of an in-memoriam draft speech that had been written by Nixon speechwriter Bill Safire in case of a failed moon landing" [2]. The video was deliberately transparent about using AI to fabricate someone's voice and likeness, yet it still tricked many viewers into thinking it was an unaired version of the speech.

Now imagine a video on the internet of a celebrity endorsing a product, except that instead of the actual celebrity promoting it, generative AI has been used to mimic their voice and image. This happened to actor Tom Hanks in August of 2024, when his name and likeness were used to promote miracle cures and wonder drugs [4]. He had a similar issue in October of 2023, when an AI-generated, digitally de-aged version of him appeared in a dental ad. In both cases he had to make multiple statements on social media warning consumers not to trust the companies using his likeness, and by the time those warnings went out, it is highly likely that many people had already been fooled into buying the supposed miracle drugs. This is a prime example of how, with the right tools, even users with little technical experience can replicate these kinds of deepfakes and fool millions.
Deepfakes extend beyond videos and images; they also manifest in phone calls. For example, my grandparents received a call from "my brother" saying that he was in jail and needed a certain amount of money transferred by midnight so that he could be bailed out. Though the voice was not actually my brother's, my grandparents, who were unfamiliar with deepfakes, noted how closely its tone and cadence mimicked his. Had they not called my father afterwards, they very likely would have transferred the money and lost thousands of dollars to scammers. This kind of scam is becoming extremely common, with thousands of scam operations using these techniques to steal money from unassuming victims. In March 2019, for instance, the CEO of an energy firm heard what he believed was his boss's voice on the phone ordering the transfer of €200,000 ($216,240) to a supplier in Hungary [2]. Because the voice carried the same tone and slight German accent the CEO was used to, the transaction was carried out. Instead of going to the supplier, the money was moved to Mexico and channeled into other accounts, and the energy firm was forced to report the incident to its insurer. Left unregulated, such incidents will only multiply, costing individuals and corporations alike millions of dollars.
Eliezer Yudkowsky, a research lead at the Machine Intelligence Research Institute, warns bluntly: "Shut it all down. We are not ready. We are not on track to be significantly readier in the foreseeable future. If we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong" [5]. Given the legal and ethical implications of deepfakes and the current lack of regulation, this is an understandable position. However, abandoning a technology that can be used beneficially in medicine and academia may not be a realistic path. The real question, instead, is how we regulate generative AI, specifically its use in deepfakes. That regulation would have to come from the government in some form, establishing standards for verifying artificially generated images, videos, and voices.
In the meantime, the best defense is the deepfake detector: an app or service that scans a specific piece of media to determine whether it is artificially generated. This matters all the more now that the targets of deepfakes are no longer just politicians and actors but business owners and families as well. According to Meredith Somers' article, ideas for combating deepfake calls specifically include agreeing on a secret question to ask at the start of a call, and educating both family members and employees about increasingly frequent deepfake attacks [2]. The best weapons against these tactics are knowledge and skepticism, because readily believing that any video is real is a recipe for disaster.
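As a rough illustration of how such a detector is used, the sketch below scores a single image with a binary real-versus-fake classifier. Everything here is a hypothetical stand-in: the ResNet-18 backbone is a common but arbitrary choice, the fine-tuned weights file is assumed rather than provided, and the image is a blank placeholder.

    # Hypothetical deepfake-detector inference sketch: score one image with a
    # binary real-vs-fake classifier (the fine-tuned weights are assumed).
    import torch
    import torch.nn as nn
    from torchvision import models, transforms
    from PIL import Image

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    def load_detector():
        model = models.resnet18(weights=None)          # arbitrary backbone choice
        model.fc = nn.Linear(model.fc.in_features, 2)  # two logits: [real, fake]
        # model.load_state_dict(torch.load("detector_weights.pt"))  # hypothetical file
        model.eval()
        return model

    def fake_probability(model, image_path):
        """Return the model's estimated probability that the image is fake."""
        img = Image.open(image_path).convert("RGB")
        batch = preprocess(img).unsqueeze(0)  # add a batch dimension
        with torch.no_grad():
            logits = model(batch)
        return torch.softmax(logits, dim=1)[0, 1].item()

    Image.new("RGB", (256, 256)).save("suspect_frame.jpg")  # blank stand-in image
    detector = load_detector()
    print(f"P(fake) = {fake_probability(detector, 'suspect_frame.jpg'):.2f}")

A production detector would score many frames of a video and look for inconsistencies between them, but the core idea is the same: detection reduces to ordinary classification, which is why these tools can be packaged into consumer-facing apps and services.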
In a world increasingly influenced by artificial intelligence, the rise of deepfakes presents profound challenges that cannot be ignored. While generative AI offers remarkable opportunities for innovation, its unchecked development poses significant risks to security. The examples discussed here show that deepfakes are already affecting lives and could escalate without proper safeguards in place. Effective strategies, such as deploying deepfake detection tools and fostering educational initiatives, are crucial to combating these threats.
AI is not the end of the world. In fact, I experimented with multiple forms of generative AI to see how each would format this very paper. The beneficial possibilities are essentially endless, provided the proper regulations are put in place. The responsibility lies with both policymakers and society to ensure that we harness the power of AI responsibly. By prioritizing skepticism and critical thinking, we can mitigate the dangers posed by deepfakes and safeguard the integrity of our information landscape. The future of AI should enhance our lives rather than undermine them, and that starts with taking proactive steps today.
References
[1] B. Desowitz, "'Rogue One': How ILM Created CGI Grand Moff Tarkin and Princess Leia," IndieWire, 2017. https://www.indiewire.com/awards/industry/rogue-one-visual-effects-ilm-digital-grand-moff-tarkin-cgi-princess-leia-1201766597/ (accessed October 28, 2024).
[2] M. Somers, "Deepfakes, Explained," MIT Sloan, 2020. https://mitsloan.mit.edu/ideas-made-to-matter/deepfakes-explained (accessed October 28, 2024).
[3] A.C. Karp, "Our Oppenheimer Moment: The Creation of A.I. Weapons," The New York Times, 2023. https://www.nytimes.com/2023/07/25/opinion/karp-palantir-artificial-intelligence.html (accessed October 28, 2024).
[4] C. Blackwelder, "Tom Hanks Warns Followers About AI-Generated Ads Using His Name, Likeness and Voice," ABC News, 2024. https://abcnews.go.com/GMA/Culture/tom-hanks-warns-followers-ai-generated-ads/story?id=113272901 (accessed October 28, 2024).
[5] E. Yudkowsky, "The Only Way to Deal With the Threat From AI? Shut It Down," Time, 2023. https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/ (accessed October 28, 2024).