This is Niche Gamer Tech. In this column, we regularly cover tech and things related to the tech industry. Please leave feedback and let us know if there’s tech or a story you want us to cover!
Twitter has announced it’s implementing big changes to how conversations and overall visibility work on the platform, based on a user’s behavior.
The social media network will now use “thousands” of behavioral signals when filtering replies, search results, and its algorithmic recommendations.
To put it simply, if Twitter believes you’re trying to game the platform for fraudulent growth or are behaving poorly, your tweets will be pushed to the bottom of the barrel.
While details on what counts as bad behavior are expectedly scant, some examples given were: how often you’re blocked by users you interact with, whether you’re closely tied to other accounts that are known terms-of-service violators, whether you tweet at large numbers of accounts you don’t follow, whether you create lots of accounts from a single IP address, and more.
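As an illustration only (Twitter has not published its actual signals, weights, or thresholds, so everything here is a hypothetical sketch), the kind of behavioral scoring described above might look something like this:

```python
from dataclasses import dataclass

# Hypothetical signals loosely based on the examples Twitter gave;
# the real signal names, weights, and thresholds are not public.
@dataclass
class AccountSignals:
    block_rate: float          # fraction of interactions that end in a block
    linked_to_violators: bool  # closely tied to known ToS violators
    unsolicited_mentions: int  # tweets at accounts the user doesn't follow
    accounts_on_same_ip: int   # accounts created from the same IP address

def behavior_score(s: AccountSignals) -> float:
    """Toy weighted sum: a higher score means more likely to be downranked."""
    score = 5.0 * s.block_rate
    score += 2.0 if s.linked_to_violators else 0.0
    score += 0.01 * s.unsolicited_mentions
    score += 0.5 * max(0, s.accounts_on_same_ip - 1)
    return score

def is_downranked(s: AccountSignals, threshold: float = 2.0) -> bool:
    """Tweets from accounts over the threshold are pushed down, not removed."""
    return behavior_score(s) > threshold
```

Note that in this kind of scheme the tweets still exist and are still reachable; they’re just ranked out of easy view, which is exactly why the line between “filtering” and shadowbanning gets blurry.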
Twitter has already tested this new filtering system and noted an 8% drop in abuse reports from conversations and a 4% drop in abuse reports from search. The company believes these drops indicate the system is working for the better.
It’s worth pointing out that these filters are supposed to get an on/off switch at some point, but they will naturally be turned on by default, like Twitter’s previous attempts at filtering users.
Twitter has been touting these changes as a way to combat abusive users and, hopefully, to block out spammers to some extent.
As with any filtering of conversation, this raises concerns over how easily users will be flagged, justly or not, as exhibiting “bad behavior” and thereby effectively shadowbanned, as users have been in the past. It’s very possible these changes are simply the official terminology for shadowbanning, or the newest form of that system.
How do you feel about Twitter applying even more filters on their system to try and stop “abusive” behavior? Should the company filter conversation at all? Sound off in the comments below!