Facebook Is Rating the Trustworthiness of Its Users on a Scale From 0 to 1

By Elizabeth Dwoskin, The Washington Post | Updated: 22 August 2018 10:49 IST
Highlights
  • Facebook has begun to assign its users a reputation score
  • It was developed as part of effort against fake news: Facebook
  • "It isn't meant to be absolute indicator of person's credibility"

Photo Credit: Bloomberg photo by Andrew Harrer

Facebook has begun to assign its users a reputation score, predicting their trustworthiness on a scale from zero to one.

The previously unreported ratings system, which Facebook has developed over the last year, shows that the fight against the gaming of tech systems has evolved to include measuring the credibility of users to help identify malicious actors.

Facebook developed its reputation assessments as part of its effort against fake news, Tessa Lyons, the product manager who is in charge of fighting misinformation, said in an interview. The company, like others in tech, has long relied on its users to report problematic content - but as Facebook has given people more options, some users began falsely reporting items as untrue, a new twist on information warfare that it had to account for.

It's "not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they're intentionally trying to target a particular publisher," said Lyons.

Users' trustworthiness score between zero and one isn't meant to be an absolute indicator of a person's credibility, Lyons said, nor is there a single unified reputation score that users are assigned. Rather, the score is one measurement among thousands of new behavioural clues that Facebook now takes into account as it seeks to understand risk. Facebook is also monitoring which users have a propensity to flag content published by others as problematic, and which publishers are considered trustworthy by users.

It is unclear what other criteria Facebook measures to determine a user's score, whether all users have a score, and in what ways they're used.

The reputation assessments come at a moment when Silicon Valley, faced with Russian meddling, fake news, and ideological actors that abuse the company's policies, is recalibrating its approach to risk - and is finding untested, algorithmically-driven ways to understand who poses a threat. Twitter, for example, now factors in the behaviour of other accounts in a person's network as a risk factor in judging whether a person's tweets should be spread.

But how these new credibility systems work is highly opaque, and the companies are wary of discussing them, in part because doing so might invite further gaming - a predicament that the firms increasingly find themselves in as they weigh calls for more transparency around their decision-making.

"Not knowing how [Facebook is] judging us is what makes us uncomfortable," said Claire Wardle, director of First Draft, a research lab within the Harvard Kennedy School that focuses on the impact of misinformation and is a fact-checking partner of Facebook, of the efforts to assess people's credibility. "But the irony is that they can't tell us how they are judging us - because if they do, the algorithms that they built will be gamed."

The system Facebook built for users to flag potentially unacceptable content has in many ways become a battleground. The activist Twitter account Sleeping Giants called on followers to take technology companies to task over the conservative conspiracy theorist Alex Jones and his Infowars site, leading to a flood of reports about hate speech that resulted in him and Infowars being banned from Facebook and other tech companies' services. At the time, executives at the company questioned whether the mass-reporting of Jones' content was part of an effort to trick Facebook's systems. False reporting has also become a tactic in far-right online harassment campaigns, experts say.

Tech companies have a long history of using algorithms to make predictions about people, from how likely they are to buy products to whether they are using a false identity. But with the backdrop of increased misinformation, now they are making increasingly sophisticated editorial choices about who is trustworthy.

In 2015, Facebook gave users the ability to report posts they believe to be false. A tab on the upper right-hand corner of every Facebook post lets people report problematic content for a variety of reasons, including pornography, violence, unauthorised sales, hate speech, and false news.

Lyons said that she soon realised that many people were reporting posts as false simply because they did not agree with the content. Because Facebook forwards posts that are marked as false to third-party fact-checkers, she said it was important to build systems to assess whether the posts were likely to be false in order to make efficient use of fact-checkers' time. That led her team to develop ways to assess whether the people who were flagging posts as false were themselves trustworthy.

"One of the signals we use is how people interact with articles," Lyons said in a follow-up email. "For example, if someone previously gave us feedback that an article was false and the article was confirmed false by a fact-checker, then we might weight that person's future false news feedback more than someone who indiscriminately provides false news feedback on lots of articles, including ones that end up being rated as true."
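The weighting Lyons describes could be sketched roughly as follows. This is a minimal, hypothetical illustration of the idea, not Facebook's actual system: the function names, the smoothing prior, and the ranking step are all assumptions made for the example.

```python
def reporter_weight(confirmed_false, total_reports, prior=1.0):
    """Return a 0-to-1 credibility weight for a user who flags posts.

    Hypothetical: uses a smoothed hit rate (a Laplace-style prior), so a
    user with no track record starts at a neutral 0.5 rather than an
    extreme score. A user whose past reports were mostly confirmed false
    by fact-checkers scores high; an indiscriminate flagger scores low.
    """
    if total_reports < 0 or confirmed_false > total_reports:
        raise ValueError("invalid report counts")
    return (confirmed_false + prior) / (total_reports + 2 * prior)


def prioritize_for_fact_check(reports):
    """Rank articles for fact-checker review by weighted report volume.

    `reports` is a list of (article_id, confirmed_false, total_reports)
    tuples, one per user report. Each report contributes its reporter's
    credibility weight, so many low-credibility flags count for less
    than a few high-credibility ones.
    """
    scores = {}
    for article_id, confirmed, total in reports:
        scores[article_id] = scores.get(article_id, 0.0) + reporter_weight(confirmed, total)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

Under this toy scheme, a user with 9 of 10 past reports confirmed false gets a weight of about 0.83, while someone who flagged 20 articles with only 1 confirmed gets about 0.09 - matching the intuition in Lyons' example that discriminate reporters should count for more.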

The score is one signal among many that the company feeds into more algorithms to help it decide which stories should be reviewed.

"I like to make the joke that, if people only reported things that were [actually] false, this job would be so easy!" said Lyons in the interview. "People often report things that they just disagree with."

She declined to say what other signals the company used to determine trustworthiness, citing concerns about tipping off bad actors.

© The Washington Post 2018
