Eric Lieberman, DCNF
Twitter announced Tuesday that it will ramp up its efforts to cleanse its platform by making it harder to view tweets that seemingly don’t contribute to the conversation.
The measure, the company stated, is specifically designed to combat communications that could be regarded as abuse or are from “what some might refer to as ‘trolls.’”
“Some troll-like behavior is fun, good and humorous,” Twitter’s David Gasca, product manager for health, and Del Harvey, vice president of trust and safety, wrote in a company blog post. “What we’re talking about today are troll-like behaviors that distort and detract from the public conversation on Twitter, particularly in communal areas like conversations and search.”
The social media platform is now “integrating new behavior signals into how Tweets are presented.” The signals apply if an account hasn’t confirmed its authenticity through an email address, if the same person signs up for multiple accounts around the same time, if an account “repeatedly Tweet[s] and mention[s] accounts that don’t follow them,” or if activity appears to be part of “a coordinated attack.”
Twitter has long ranked tweets, both in search results and in conversation threads, based on factors like relevancy, determined in part by the content a user typically views. That decision-making process will now also incorporate the aforementioned negative signals.
Accurately judging such ambiguous and apparently difficult-to-detect behavioral signals will likely be cumbersome for a company pressured both to do more to combat content that some consider “hate speech” and to ensure that the platform cultivates an ethos of free expression.
Using an algorithm to make these distinctions might help allay concerns that individual employees, either on their own or at the behest of their superiors, are censoring content they dislike for ideological reasons. But algorithms often reflect their creators, and can be faulty due to unforeseen circumstances or inherent biases.
Still, the Twitter executives said this is just the start.
“Our work is far from done. This is only one part of our work to improve the health of the conversation and to make everyone’s Twitter experience better,” the two Twitter executives continued. “This technology and our team will learn over time and will make mistakes. There will be false positives and things that we miss; our goal is to learn fast and make our processes and tools smarter.”
Convincing people that it won’t go too far in its latest efforts to purge or downrank certain content will be a hard task for a company that is often accused of inappropriate censorship.