The Anti-Defamation League (ADL) has claimed that Steam is “harboring” extremist user posts and content, and recommends that Valve make radical changes or see the platform shunned. In turn, the ADL has been criticized for overreaching and issuing unrealistic demands.
The report was shared on April 30th, with the accompanying tweet stating “Steam, the largest and most important online store for PC gamers, offers white supremacists a new outlet for their hateful rhetoric and calls for violence.”
The article, entitled “This is Not a Game: How Steam Harbors Extremists” [1, 2, 3, 4, 5], makes several claims. First, it describes an incident in June 2019 in which Ross Farca threatened to commit a mass shooting at a synagogue.
His arrest came after he was reported for threats made through “a chatroom on an online gaming platform.” The ADL states this platform was Steam, and that Farca used the online name “Adolf Hitler (((6million))))”.
Noting Steam’s popularity, the ADL claimed the platform “recently gained popularity among white supremacists for being a platform, like Gab and Telegram, where they can openly express their ideology and calls for violence.”
The ADL also highlights the rise of gaming’s popularity, and how Steam has “direct and lucrative relationships with most major game companies.”
They also state “many of these game companies have made public statements about and dedicated significant resources towards keeping their products safe from the kinds of hateful ideologies espoused by extremists — while continuing to work with Steam.”
Even while citing its own survey, in which 23% of American respondents stated they “were exposed to extremist white supremacist ideology in online games,” the ADL conceded that claims of extremists attempting to recruit or organize through games lacked evidence.
“The evidence of the widespread extremist recruiting or organizing in online game environments (such as in Fortnite or other popular titles) remains anecdotal at best, and more research is required before any broad-based claims can be made.”
Nonetheless, the ADL called for more transparency in how Valve deals with extremism on the platform.
“That being said, the Steam platform is among the growing list of virtual spaces where one can encounter extremist activity. Steam is of specific concern now when considering that the online games where Americans experienced the most harassment, according to ADL’s survey, were Defense of the Ancients 2 or DOTA 2 (79%) and Counterstrike: Global Offensive (75%). Both games were developed by Valve, the company that owns and operates Steam, underscoring a clear need for this and other companies to be more transparent about how extremism functions in online game spaces, and to call particular attention to the role Valve plays in the operating of the Steam platform.”
It should be noted that under Steam’s own rules for “Discussions, Reviews, and User Generated Content”, users’ posts and content must not include the following:
- “Flame or insult other members
- Bypass any filters
- Porn, inappropriate or offensive content, warez or leaked content or anything else not safe for work
- Threats of violence or harassment, even as a joke
- Racism, discrimination
- Abusive language, including swearing”
In addition, Steam’s subscriber agreement states “Valve may terminate your Account or a particular Subscription for any conduct or activity that is illegal, constitutes a Cheat, or otherwise negatively affects the enjoyment of Steam by other Subscribers.” Both DOTA 2 and Counterstrike: Global Offensive also offer ways to report and mute players that are abusive.
Citing reports by Vice, the Huffington Post, and the Center for Investigative Reporting (all discussing racist or hateful groups that at one time existed on Steam), the ADL then pointed to Valve’s statement that it would “allow everything onto the Steam Store, except for things that we decide are illegal, or straight up trolling.”
They also discussed Valve’s later clarifying post, which explained that while the policy was intentionally vague, the focus of the company’s harm mitigation efforts was game developers who “aren’t actually interested in good faith efforts to make and sell games to you or anyone.”
However, that clarification came about after developers of anime-styled pornographic games were erroneously told their games would be removed from Steam. This was due to Valve halting adult-game submissions until a new user-filter system had been put in place.
Because many believed Steam was removing all adult or even anime-styled content, Valve clarified that it would “allow everything” that was not illegal or abusive.
The way the ADL used the statement could erroneously be taken to suggest Valve was discussing something other than what games it allowed onto its storefront (for instance, that it was refusing to remove abusive content or users).
Even so, the ADL did point out that, following “a request from the German government, Valve removed 50 instances of Nazi-related user content.”
They did state, though, that Valve’s announcement that it would not allow Rape Day on Steam was not due to the game’s content, “but because it would pose ‘unknown costs and risks’ to Valve, developer partners and customers.” Some may feel the ADL was implying that Valve removed the game for reasons other than it being against their terms.
Continuing, the ADL claims that “It was disturbingly easy for ADL’s researchers to locate Steam users who espouse extremist beliefs, using language associated with white supremacist ideology and subcultures, including key terms, common numeric hate symbols and acronyms.”
“In a non-exhaustive, largely random recent search, we were able to identify nearly 200 unique Steam accounts that embraced and propagated Nazi and/or white supremacist ideology. The majority of these profiles trafficked in blatant white supremacist belief evidenced in their screen names, bio descriptions, profile pictures and comments; others either incorporated into their profiles Nazi imagery such as SS bolts and Nazi totenkopfs, or death’s heads, glorified prominent Nazi figures or fantasized about the 4th Reich. And a minority of the profiles displayed deeply antisemitic elements and/or embraced violence against Jews, using terms including “Gas the Jew” and “Smash Jew scum.” The final category includes a subset of users who posted memes involving variations of Pepe the Frog, a cartoon character appropriated by the white supremacist movement, in a context that was specifically racist, antisemitic or otherwise bigoted. These elements and their continued presence on Steam also signal an acceptance of hateful and racist rhetoric and may encourage others to share similar content.”
Some have argued that the Pepe the Frog meme only became seen as a supremacist symbol because it was used by Trump supporters (who were in turn branded supremacists by Democratic Party supporters), or because of fear-mongering news outlets.
The ADL has also continued to insist that the OK hand gesture is a white supremacist symbol, despite it being shown to be a 4chan prank [1, 2] intended to bait anti-hate organizations and individuals into hastily declaring it a racist symbol on little to no evidence. The ADL has also been criticized for deeming anti-ANTIFA symbols and the phrase “It’s OK to be White” as hateful.
Further, the ADL claimed that WW2-based games “appear to attract users who glorify the Nazi Party, Waffen SS soldiers and prominent Nazi figures, especially Adolf Hitler.”
They also cited a user who, in an emoji-laden post, encouraged violence against Jews ahead of the anniversary of Kristallnacht; users with Nazi imagery or phrases in their profiles; and memes and jokes that they claim supported or encouraged white supremacist subculture.
“Bigoted humor and irony are hallmarks of the emerging virtual counterculture that promotes radical, extreme and violent views as cool and/or humorous. In that vein, it’s not uncommon to find references on Steam to white supremacist memes, common vernacular or other trappings of this white supremacist subculture. A significant number of Steam profiles feature Pepe the Frog, a popular Internet meme that was hijacked by the alt right, in clearly white supremacist contexts. For example, user ‘Agent Pepe Kekson’ writes in his bio: ‘Kekson—Pepe Kekson, Agent 1488, With a License to Troll.’
‘Kek’ is a fabricated religion which worships Pepe the Frog. Another user, Honkler, named after a clown version of Pepe, included an animated image of a white supremacist variation of the sonnenrad in his artwork showcase.
Even Steam users that are not explicitly identifiable as extremists often post antisemitic or extremist imagery, including swastikas, graphics of the happy merchant meme (an antisemitic depiction of a Jewish man with heavily stereotyped facial features), or more violent images such as a knife through a Jewish star.”
Along with the irony that the deity’s name sounded like online slang for laughter, adherents of the joke religion began comparing other elements of its mythology to major real-world changes. These included the 2016 US election and the farcical “meme magic”: the notion that repeatedly discussing a subject online could cause it to occur in real life, with others joking that the discussion acted like a spell. Others have also compared it to The Magician tarot card (which represents bringing imagination into reality through will), and to genuine memetics.
The ADL also points to examples of users praising Adolf Hitler, neo-Nazis, mass shooters, encouraging abusive behavior against Jews or minorities, or neo-Nazi Steam groups (even in cases where those users or groups had been banned or removed).
They also claim that extremist groups recommended using Steam as an alternative platform, or used gaming terminology.
“Extremists often use gaming terminology in conversations around violence. Violent perpetrators and their adherents refer to mass shootings in terms of getting “high scores.” Stephan Balliet, who launched a (failed) attack on a synagogue in Halle, Germany on Yom Kippur 2019, included a gaming style list of “achievements” in his manifesto. Both the Halle perpetrator and Tarrant livestreamed their attacks on religious communities, ensuring their violence went viral. The footage from their GoPro devices, strapped to the shooters’ heads, mimics first-person shooter games that simulate real-world weapons-based combat scenes.”
Further, the ADL discusses how Jarrett William Smith (a US soldier who distributed bomb-making instructions on social media with the intent to kill ANTIFA members) was a neo-Nazi, and his affiliation with gaming and the Steam platform: he “allegedly” used the same username on Steam as the one he used to communicate with a neo-Nazi group.
They claim (without citation) that Smith played Hatred, and shared a screenshot from the game where he “took on a Muslim persona and killed in an urban setting. He writes, ‘My playthrough as the Muslim.’ ”
The ADL also claims Smith wanted to mod the game to re-create white-supremacist fantasies, or to play as Hitler. It also points out how Hatred was removed from, and later returned to, Steam.
Finally, the ADL makes recommendations for changes by Valve. These include (in summary):
- Having clear terms of service that “address hateful content and harassing behavior, and clearly define consequences for violations” (along with an appeal process for those who feel they were flagged in error), and that state Valve “will not tolerate hateful content or behavior based on protected characteristics.” They must also prohibit abusive tactics such as harassment, doxing, and swatting.
- “Take greater responsibility in enforcing their policies, once expanded, and do so accurately at scale.” Along with user flagging, Steam must also adopt a “proactive, swift, and continuous process to address hateful content” using a combination of AI and human monitors.
- Regularly consult with civil society, civil rights, and civil liberties groups, and seek out their advice to shape policies.
- Adopt “robust” governance, with regular transparency reports. These reports would contain data on user reporting and enforcement of Steam’s policies, along with insights into “the experiences of vulnerable communities” using Steam and any abuse they have endured.
- This would be in addition to regular external audits by a third party “so that the public knows the scope and nature of hate and harassment on the platform,” and that Steam has enforced its policies. These audits would also need to be transparent.
- “Product level design changes with anti-hate principles in mind. It must conduct a thoughtful design process that puts their users first, and incorporates risk and radicalization factors before, and not after, tragedy strikes. Just as sites have for some years embraced ‘privacy by design’, so should Steam and other platforms embrace ‘anti-hate by design’ principles and processes.”
While the ADL admits this would require “a significant change of focus and culture” for Steam, it states that if Valve does not take such measures, the video game industry should treat the platform as akin to Telegram or Gab (which it had earlier claimed were popular with white supremacists).
“Should this continue, it may be appropriate for industry, civil society and government to consider the Steam platform in the same category as platforms such as Telegram or Gab. Unlike mainstream platforms such as Facebook or Twitter, these platforms take a wholly hands off or even antagonistic stance towards making their spaces safe, respectful and inclusive for all people. Our recommendations in this category focus less on what the platform in question should do, as they have already proven themselves to be a bad faith actor in this regard. Rather, our recommendations here are aimed towards those companies whose services support and maintain the activities of the platform in question.”
The ADL also has recommendations for the industry as a whole. These include adopting a zero-tolerance policy on extremist user content in online games, enforcing policies against it, working with civil society experts, and refusing to deal with businesses that are “not doing enough to counter hate and extremism on its services.” They even suggest “addressing those concerns with that company directly and/or publicly.”
Reclaim the Net (an online group dedicated to free speech and individual liberty online) harshly condemned the ADL’s post in one of their own. Their opening sentence reads “When organizations become increasingly irrelevant, they have to create new enemies in order to stay funded.”
In the article, entitled “The ADL is coming for gamers,” Reclaim the Net states that Steam is only being “targeted” by the ADL due to being a large platform “and therefore ostensibly more ‘dangerous’ and in need of more serious ‘attention.’ ”
They also note that user flagging, the removal of over 180 games after Valve stated it would “allow anything,” “and other steps taken since have clearly not been enough for the ADL.”
Twitter users also criticized the ADL’s research and claims [1, 2]. Many claimed that the ADL was using the fear of being branded as harboring extremists as a way to be paid by Steam for guidance. “Shock doctrine” (claiming there is a problem, then “solving” it for personal gain) was also one of our concerns surrounding the World Health Organization’s definition of “gaming disorder.”
The changes the ADL proposes for Steam do appear to have flaws. For example, Steam’s rules and TOS already seem to prohibit extremist or hateful content. Monitoring everything users upload is also something even tech titans like Facebook, Twitter, and YouTube have failed to achieve, as the ADL itself admits [1, 2].
Much as some criticize the ADL for overreaching in its goals, the civil groups consulted would need to be carefully selected to ensure their advice and audits remained within reason. Those not selected could end up condemning Valve as “not doing enough.” Valve would also need to ensure that those paid for their advice did not exaggerate or invent issues in order to be continually paid for their services.
Further, redesigning all of Steam to stamp out any hateful content could impose restrictions so tight (waiting times for user-submitted content or posts to be reviewed, for example) that they cripple any form of communication on the platform. This would put it behind competitors such as the Epic Games Store or GOG.
The use of AI could also be flawed, erroneously flagging content meant to be humorous (mocking extremists through parody, for example). YouTube is frequently criticized for how its algorithms for detecting copyrighted or offensive content have resulted in mass bans and demonetization of accounts that did not violate its terms, including those of independent journalists and history channels.
Algorithms have also been proposed to enforce the European Union’s Articles 11 and 13. Designed to combat copyright violations, the laws have been criticized for numerous reasons, including doubt as to whether algorithms could distinguish between parody (memes and transformative works) and actual copyright violation (distributing a piece of media in its entirety for free).
The Pentagon announced plans in 2019 to combat “large-scale, automated disinformation attacks” by disproving deepfakes and other falsified evidence. These “anti-meme” measures were also criticized over concerns they would not be able to distinguish between jokes and content intended to deceive.
Ultimately, the ADL’s argument is that crass and offensive jokes could convert people to extremist ideologies, or make those who already hold such ideologies feel welcome. However, much as the ADL admitted that evidence of extremists using online games to recruit or organize was “anecdotal at best,” the same may be true of its own concerns.
To quote the ADL itself, “more research is required before any broad-based claims can be made.”
What do you think? Sound off in the comments below!