The Supreme Court has handed down a decision in Gonzalez v. Google. The case was brought after the family of Nohemi Gonzalez, a victim of a 2015 terrorist attack, blamed YouTube for pushing radicalizing content.
In 2020, then-President Donald Trump signed an executive order on “Preventing Online Censorship”. In the wake of the order, the administration petitioned the FCC to clarify Section 230 of the 1996 Communications Decency Act.
The provision's limits had remained largely untested before the Supreme Court. Under it, platforms like Twitter, Facebook, and other social media sites are shielded from liability for content posted on their sites by users. The provision says:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
In Gonzalez v. Google, the Supreme Court left in place a lower court ruling in Google’s favor. What sets this case apart from earlier tests of the provision is the role algorithmic recommendations play in distributing extremist content.
One argument is that the algorithm represents the company’s own speech, and thus content promoted by the algorithm should be treated as if the platform were acting as a publisher. This would open platforms up to liability for the nature of that content.
However, the current interpretation appears to treat algorithms as passive parts of a platform’s function rather than as part of the company’s speech.