Twitter Turns to Algorithms to Clamp Down on Abusive Content

By Abby Ohlheiser, The Washington Post | Updated: 2 March 2017 11:33 IST

Twitter announced a bunch of mostly iterative changes Wednesday in its fight against abuse. But one was particularly welcome to users who have ever experienced an onslaught of anonymous harassment on the platform: It's finally possible to filter out accounts with the default "egg" profile picture, so that they don't appear in your notifications.

Sure, it's considered very bad form on Twitter to keep your profile picture as the default egg, but that's not why this overdue change is useful. Twitter makes it very easy for anyone to create new accounts, including those who make "throwaway" Twitter handles specifically for the purpose of harassing someone else. This change makes it harder for those accounts to reach their intended targets.

In addition to introducing a notification filter for anyone without a custom profile picture, Twitter will let you filter out notifications from users who haven't bothered to verify their email addresses or phone numbers.


Twitter said the filters would be available to "everyone on Twitter" once they roll out, and provided instructions for turning them on via the iPhone app.


The platform also expanded the "mute by keyword" feature for notifications that it first introduced in November. Users can now also mute keywords, phrases and conversations from their timelines, and set time limits for how long those mutes will last. If I wanted to mute, say, mentions of "The Walking Dead" or whatever show I don't watch that everyone else on Twitter loves, I could set a mute for the relevant keywords and let it automatically expire after a day, a week or a month. Or I could let the mute live on indefinitely, which is tempting.

Ed Ho, Twitter's vice president of engineering, acknowledged in a blog post that both changes had been widely requested by Twitter's user base.


The company has long struggled to effectively address the abuse and harassment problem that plagues many of Twitter's users, not to mention the reputation of Twitter itself. In recent months, it's tried to do more about it, rolling out long-requested tweaks to its safety procedures. Twitter is also clearly trying to regain the trust of users who have given up on its ability to effectively solve this problem, which brings us to another announced change on Wednesday: Algorithms are starting to play a bigger role in how Twitter identifies potential abuse.


Twitter is starting to "identify accounts as they're engaging in abusive behavior, even if this behavior hasn't been reported to us," Ho wrote. And when its algorithms do detect potentially abusive behavior, Twitter is issuing temporary limitations on those accounts. Some have asked Twitter to be more proactive in identifying potential abuse, instead of simply relying on user reports and the moderators who evaluate those reports.


But the rollout of this change hasn't been without controversy. The "timeouts" freaked out a number of users last week who noticed them before Twitter's official announcement, because it wasn't clear exactly what was prompting them. Some users who, for instance, swore at the official account of the vice president were hit with these punishments last week. People started speculating that Twitter was punishing any account that swore at a verified user, reviving criticism of how Twitter handles high-profile instances of abuse and harassment.

There's plenty to be said about Twitter's longtime inconsistency in enforcing its own abuse policies, and the role that media attention has played in getting the company to take action. On Wednesday, Twitter confirmed that the new timeouts might be triggered "if an account is repeatedly tweeting without solicitation at non-followers," among other factors.

Ho wrote that the company's aim is "to only act on accounts when we're confident, based on our algorithms," that abusive behavior has occurred, but acknowledged that the new process will inevitably make "mistakes."

© 2017 The Washington Post