Darpa is Building Anti-Meme Tech to Stop Deepfakes From Going Viral or Spreading Fake News

This is Niche Gamer Tech. In this column, we regularly cover tech and things related to the tech industry. Please leave feedback and let us know if there’s tech or a story you want us to cover!

The Pentagon has announced plans to combat “large-scale, automated disinformation attacks” by disproving deepfakes and other falsified evidence.

RT is reporting that DARPA (the Defense Advanced Research Projects Agency) is aiming to create software that can “automatically detect, attribute, and characterize falsified multi-modal media to defend against large-scale, automated disinformation attacks.”

In short, the “Semantic Forensics” program will scan news stories and social media posts, using algorithms to attempt to determine whether something is fake, identify the source behind it, and predict how viral it could be. If the program is successful after four years of trials, it will expand to target all “malicious intent.” Tests include feeding the program 500,000 stories, with 5,000 fakes among them.

In the example given within DARPA’s own proposal, a news story about a violent protest matches neither the images used nor the audio from a video of the event. The author of the piece would not typically report on that style of news story, and the vocabulary used does not match the author’s usual work. Finally, the story does not cite a “high credibility organization.”
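The cues in DARPA’s example could, in principle, be combined into a simple score. The sketch below is purely illustrative and is not DARPA’s actual system; every field name in it is invented for the sake of the example.

```python
# Hypothetical sketch: combining the consistency cues from DARPA's example
# into a simple "likely fake" tally. All field names are invented.

def fake_score(story):
    """Count how many of the example's red flags a story raises."""
    score = 0
    if not story.get("media_matches_text", True):
        score += 1  # images/audio do not match the written account
    if not story.get("topic_matches_author", True):
        score += 1  # author does not usually cover this kind of story
    if not story.get("vocabulary_matches_author", True):
        score += 1  # word choice differs from the author's usual work
    if not story.get("cites_credible_source", True):
        score += 1  # no "high credibility organization" is cited
    return score

# The protest story from DARPA's example fails every check:
story = {
    "media_matches_text": False,
    "topic_matches_author": False,
    "vocabulary_matches_author": False,
    "cites_credible_source": False,
}
print(fake_score(story))  # 4: every cue flags the story as suspect
```

A real system would of course need machine learning to evaluate each cue in the first place; the hard part is producing those judgments, not adding them up.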

Deepfakes, the act of using enough footage of an individual to create fake video or audio of them, have recently risen into public awareness. Originally these included putting celebrity faces onto pornographic videos (later banned on Reddit), but over time they became more elaborate and convincing.

These include a fake speech by former US President Barack Obama, and “Ctrl Shift Face,” a YouTube channel dedicated to using deepfakes for entertainment purposes. Its most popular videos show comedian Bill Hader impersonating Arnold Schwarzenegger and Tom Cruise, with his face almost seamlessly shifting into the face of the person he is imitating.

Others include a video of House Speaker Nancy Pelosi that was slowed down so she appears drunk or disoriented. While some outlets expressed deep concern, others claimed the audio distortion in the video should have been enough to indicate it was fake.

Another showed Mark Zuckerberg delivering a sinister speech about social media being used to control others, generated from his many public appearances.

NotJordanPeterson.com offered a way for users to enter text, which would then be read out to sound almost like clinical psychologist Jordan Peterson (albeit monotone). While the welcome message can still be heard, the website has since disabled user input after Peterson denounced it on his own website.

Deepfakes have also technically been used by Hollywood: CGI to make an actor or actress look younger, or to impose that face onto a different actor. Examples include a younger Schwarzenegger in Terminator Genisys, and Rogue One using CGI versions of Peter Cushing and Carrie Fisher looking as they did in the original Star Wars trilogy.

The situation with combating fake news is further compounded as faith in news organizations crumbles. After the US media failed to predict Donald Trump would win the election, they proposed he had won due to “Russian collusion,” something later disproven by the Mueller Report.

During that time, memes supporting Trump and those intended to troll were considered the work of “Russian agents” by some outlets; these included false claims that people could vote via tweet, and candid videos alleging Hillary Clinton was in deteriorating health.

Many feel that major news outlets denounced positive stories about Trump (and negative stories about Clinton) by declaring them fake news, a term Trump would later turn on them.

Attempts to tackle fake news have raised the same questions time and again: Will those held in high regard be immune to being called fake news? Will such a system be able to distinguish between entertainment, trolling, and intent to deceive? Doubt has even fallen on fact-checking organizations such as Snopes.

Even if such a system had no malicious or censorious goals, relying on an algorithm is itself a point of contention. YouTube is frequently criticized for how its algorithms for copyrighted or offensive content have resulted in mass banning and demonetization of accounts that did not violate its terms, including independent journalists and historical channels.

Algorithms are also proposed for the European Union’s Articles 11 and 13. Designed to combat copyright violations, the laws have been criticized for numerous reasons, including doubt as to whether algorithms would be able to distinguish between parody (memes and transformative work) and actual copyright violation (distributing a piece of media in its entirety for free).

What do you think? Sound off in the comments below!




Ryan is a former Niche Gamer contributor.
