Picture this: The President of the United States releases a video of his final campaign speech in November, declaring that the United States is not fit to be the world’s superpower because of its hypocritical democracy, awful moral values and disregard for the rule of law.
Such a statement sounds shocking because it contradicts everything the US claims to stand for. Voters would be outraged by the comments, and polls would tip in favor of rival candidates.
What if this video were fake or fabricated? Or what if a team like Pete Buttigieg’s intentionally added artificial laughter to its videos to stir emotion?
Or Michael Bloomberg’s team released a doctored video to make him appear to perform better in a debate than he actually did? (Yes, this really happened!)
Misinformation released into the public domain is highly likely to have damaging consequences. Right?
In recent years, deepfakes have emerged, ruining the reputations of people shown saying or doing things they never said or did. #Deepfakes are fabricated videos that manipulate an individual’s words or actions. Since the 2016 US elections, deepfakes have grown popular as people create them to change perceptions or commit fraud.
Facebook, Twitter, and YouTube have all recently announced bans on deepfakes ahead of the 2020 election.
With the rise of deepfakes, the situation could spiral out of control as the 2020 elections approach. Fake news is an increasingly serious problem in the US, with a Reuters survey showing deepfakes spiking and most Americans worried about them.
Momentum for banning deepfakes is building in China, where the government has announced legislation to address the problem, and in the European Union (EU), which is drafting laws to counter fake news threats.
The Era of Deepfakes Is Here
Social media platforms such as YouTube, Facebook, Instagram and Reddit have not been spared the deepfakes problem, and analysts forecast even greater risk in 2020. YouTube has been stepping up efforts to cut off hate speech on its platform by running detectors on user comments. False information is powerful, and with videos now going viral in hours, the potential damage is enormous.
The viral video made by Jordan Peele, in which a synthetic Barack Obama insults Trump, demonstrates the era of deepfakes and the serious threat they pose to democracy. At first, it is hard to believe the words coming out of Obama’s mouth, but as you keep watching, you notice odd details and begin to question what you are seeing. Social media compounds the problem, because inaccurate statements spread faster and cause greater public harm.
Powered by #artificialintelligence, deepfakes can be made easily and distributed within hours. The AI tools behind them are sophisticated and largely open source, which makes deepfakes hard to stop as developers repurpose these technologies to manipulate the public.
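To make the mechanics concrete, here is a minimal sketch of the autoencoder trick behind many open-source face-swap tools: a single shared encoder paired with one decoder per identity. The layer sizes, loss and training step below are illustrative assumptions for this article, not any particular tool’s implementation.

```python
# A minimal face-swap autoencoder sketch: shared encoder, two decoders.
# All sizes are illustrative assumptions, not a specific tool's design.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),     # 64 -> 32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Rebuilds a face crop from the shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 256, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Training: each decoder learns to reconstruct its own person's faces
# from the same shared latent space (random tensors stand in for crops).
faces_a = torch.rand(4, 3, 64, 64)
faces_b = torch.rand(4, 3, 64, 64)
loss = nn.L1Loss()(decoder_a(encoder(faces_a)), faces_a) \
     + nn.L1Loss()(decoder_b(encoder(faces_b)), faces_b)
loss.backward()

# The swap: encode person A's face, decode with person B's decoder,
# yielding B's face wearing A's pose and expression.
fake = decoder_b(encoder(faces_a))
```

Because the encoder is shared, it learns pose and expression common to both people, while each decoder learns one identity, and that split is exactly what makes the swap possible.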
Here is another interesting twist: TikTok has built deepfake software into an unreleased feature set. Its developers created a tool that lets users generate their own deepfakes, expanding the generative media on the platform.
This raises the question: what legitimate uses, if any, do deepfakes have?
Constructive arguments on both sides must be heard and weighed to understand the deepfakes phenomenon and develop proactive solutions.
Fake Audio, Text and Voice
The fast growth of voice technology in audiobooks, podcasts and digital media has created the perfect opening for #deepfakes, as software developers can now fabricate voices and text that were never real. Lyrebird, a voice company based in Canada, develops voice solutions and acknowledges the growing threat of deepfakes. According to Lyrebird, the voice space is an emerging and rapidly growing industry, but with deepfakes on the rise, those gains could be eroded as people use the technology for the wrong reasons.
Voice cloning is becoming commonplace, with many start-ups developing new tools in this area, including Overdub and Replica, which recently graduated from TechStars.
Tacotron2 and Wavenet are examples of deep learning text-to-speech systems. Tacotron2 converts an input sequence of text into a spectrogram, a time-frequency picture of speech, and a neural vocoder such as Wavenet then generates the audio waveform from that spectrogram, sampling each audio step from the probability distribution the network predicts. Trained on recordings of a target speaker, this pipeline yields convincing synthetic voices. Undoubtedly, manipulation of video, voice and text is easy given the array of AI solutions available.
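As a concrete illustration, here is a minimal sketch of that two-stage pipeline using NVIDIA’s published Tacotron 2 checkpoint on PyTorch Hub. Note the assumptions: NVIDIA pairs Tacotron 2 with WaveGlow, a different neural vocoder than WaveNet; the hub entry-point names below follow NVIDIA’s published example, which also expects a CUDA GPU.

```python
# A sketch of text -> spectrogram -> waveform, assuming NVIDIA's torch.hub
# entry points (NVIDIA pairs Tacotron 2 with the WaveGlow vocoder, not WaveNet).
import torch

repo = 'NVIDIA/DeepLearningExamples:torchhub'
tacotron2 = torch.hub.load(repo, 'nvidia_tacotron2').to('cuda').eval()
waveglow = torch.hub.load(repo, 'nvidia_waveglow').to('cuda').eval()
utils = torch.hub.load(repo, 'nvidia_tts_utils')

# Stage 1: characters -> mel spectrogram. Stage 2: spectrogram -> audio.
sequences, lengths = utils.prepare_input_sequence(["The era of deepfakes is here."])
with torch.no_grad():
    mel, _, _ = tacotron2.infer(sequences, lengths)
    audio = waveglow.infer(mel)  # raw waveform, 22,050 Hz in NVIDIA's setup
```

Swap in a model fine-tuned on a particular person’s recordings and the same dozen lines produce that person’s voice, which is exactly why the barrier to audio deepfakes is so low.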
#Deeplearning technologies can clone voices and fabricate videos and photos with ease, and these AIs keep growing more sophisticated. From the doctored video of Nancy Pelosi appearing to stumble through her words as if drunk, to the fake Joe Rogan voice, there are countless examples of deepfakes growing in influence. Today you can create your own robot voice, and with tools like the Lyrebird example above, you can clone other people’s voices and pose as them.
Despite the potential benefits these #AI technologies hold, the prevailing risks could lead to catastrophic consequences. The Sleepwalkers podcast episode “Truth to Power” explores deepfakes and fake news in society.
Statistics on Deepfakes
The deepfakes problem is spreading: a recent Pew Research Center poll estimates that fake news has led 70% of American citizens to doubt the government. Additionally, 55% of the US citizens interviewed said fake information had strained their relationships with others because of inaccurate statements.
Concern about deepfakes, estimated at over 50% of respondents, surpassed the 35% who said terrorism posed a major threat to the country. The Pew Research Center survey also found that concern about deepfakes ran higher than concern about other sensitive issues, including immigration and racism. Deepfakes technology rose to prominence in 2016, and at current rates, forecasts put its penetration at 70% by the end of 2020.
Implications of Deepfakes
Deepfakes can cause more harm than previously thought, and that becomes clearer each day. For example, deepfakes can alter election outcomes, leading voters to wrong choices by tainting candidates’ integrity. Lying during an election is recognized as unacceptable, in the United States and around the world.
Using deepfake technology to persuade voters is morally and ethically wrong because it compromises the beliefs people stand for. Deepfake developers become more inventive each day, with some impersonating real people’s likenesses and voices using #AI.
Deepfakes call people’s credibility into question.
The dangers of deepfakes are real.
Public misinformation spreads the wrong ideas about individuals, which ultimately leads to negative outcomes.
Expert Analysis on Deepfakes Technology
Deepfakes are on the rise as more people use the technology to undermine public figures or broadcast false messages to the public. As deepfakes multiply, one wonders what the future holds for a society polarized by misinformation. Professor Villasenor of UCLA is among the experts concerned about the dangers deepfakes technology creates. He claims, “deepfakes creators have the upper hand in elections because of instantly creating videos, voices and photos that misrepresent the reality and influence outcomes within a short time span”.
Villasenor adds, “Deepfakes will become powerful as developers use sophisticated #artificialintelligence tools out in the market”.
People exposed to these fabricated images and claims can have their perceptions changed regardless of the truth. In the current internet age, Villasenor admits, viral videos with no authenticity trump the truth because people believe them so easily.
Peter Singer, a cybersecurity expert, adds his voice, pointing to “viral shares that make lies stronger”. Social media platforms such as Facebook, Instagram and Reddit let users share files, videos and photos, making the spread of deepfakes easy. Singer claims, “The world is experiencing cyber threats from every corner as people use deepfakes to pursue their interests.”
Legal Frameworks and Deepfakes
Solid legal frameworks matter in the era of deepfakes, considering the harm this technology can unleash on the public. There are no laws in the United States specific to deepfakes, though illegality arises when individuals use them for acts such as harassment, extortion and fraud. Legislation is critical to control this harmful trend and keep the internet safe for users and the public domain.
The General Data Protection Regulation (GDPR) governs online users’ data by giving individuals control over it. Online data relates to the deepfakes phenomenon because deepfakes are built from people’s information, and GDPR holds those who handle such data responsible for their actions, so it is applicable to deepfakes. The California Consumer Privacy Act (CCPA) is a second piece of legislation, focused on consumer data, unlike GDPR, which covers a broader range of stakeholders using data. By extension, the fight against deepfakes can benefit from CCPA’s regulation of data use.
The Malicious Deep Fake Prohibition Act of 2018, a bill introduced in Congress, aims to deter deepfakes by punishing offenders. Every citizen has a right to accurate information, and the bill serves that right by reducing the existing dangers. Similarly, the proposed Deepfakes Accountability Act works toward the same goal, controlling how manipulated media reaches the public.
What is Next for Deepfakes?
From this analysis, it emerges that deepfakes not only cause public harm and panic but also contravene individual rights, such as the access to accurate information enshrined in the constitution. The Nancy Pelosi video shows the damage manipulated media can do to public figures, or to anyone who cares about their reputation and privacy. Despite efforts to curb deepfakes, better controls are needed, because #AI lets creators change tactics and thrive.
Danielle Citron, a professor of law and vice president of the nonprofit Cyber Civil Rights Initiative (CCRI), has devised an eight-point plan for political campaigns this election year to protect against deepfakes, including:
1. Issue a statement that the candidate will not knowingly disseminate fake or manipulated media of opponents and urge campaign supporters to abide by the same commitment.
2. Get familiar with the terms of service and community guidelines for social media platforms on this issue, as well as the processes to report inappropriate content.
3. Designate a team ready to manage an incident.
4. Obtain a briefing on key trends and threats from knowledgeable experts.
5. Conduct an internal red teaming exercise to prepare for the range of ways a fake could target the candidate or campaign.
6. Establish relationships with company officials that will be helpful during an incident.
7. Establish procedures to quickly access original video and/or audio footage.
8. Prepare contingency web content or templates that could be swiftly used to counter false claims.
Companies are not standing by either. Facebook and Microsoft recently partnered with academic institutions to address deepfakes. The companies plan to assemble a database of fake clips and images and use it to develop detection technology that can flag deepfakes once they appear online. Technological innovation is critical in addressing deepfakes because it produces solutions that matter, and this is a step in the right direction.
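To give a flavor of what such a labeled database enables, here is a minimal sketch of a common research baseline: a binary classifier fine-tuned on real and fake face crops. This is not Facebook’s or Microsoft’s actual system, and the tensor shapes and labels below are illustrative assumptions.

```python
# A baseline deepfake detector sketch: fine-tune a pretrained CNN to score
# each video frame as real or fake. Not any company's production system.
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained backbone with a single "probability of fake" output.
detector = models.resnet18(pretrained=True)
detector.fc = nn.Linear(detector.fc.in_features, 1)

optimizer = torch.optim.Adam(detector.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(frames, labels):
    """frames: (N, 3, 224, 224) face crops; labels: (N,), 1.0 = fake, 0.0 = real."""
    optimizer.zero_grad()
    logits = detector(frames).squeeze(1)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Illustrative call with random stand-in data in place of a labeled database.
train_step(torch.rand(8, 3, 224, 224), torch.randint(0, 2, (8,)).float())

# Inference: score a frame; a video is flagged if enough frames look synthetic.
detector.eval()
with torch.no_grad():
    p_fake = torch.sigmoid(detector(torch.rand(1, 3, 224, 224)))
```

The hard part is not the classifier but the labeled data, which is precisely why a shared database of fakes from Facebook and Microsoft matters.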
An audio version of this Medium article is available on Spotify and Apple Podcasts