The European Commission is asking Facebook, Twitter, Google and others to share more details about what their platforms are doing to curb disinformation. / AP

The European Commission wants to hold Facebook, Google, TikTok and Twitter accountable for disinformation shared on their platforms. Under proposed rules released Wednesday, the commission is asking the tech companies to do more to address disinformation online and to show proof that they have taken action.

The guidance issued Wednesday urges platforms and players in online advertising to block accounts sharing disinformation and ban those that regularly post debunked content. The commission also wants social media sites to improve transparency of political ads, including by properly labeling paid-for content and allowing users to see who is providing those ads.

"Threats posed by disinformation online are fast evolving and we need to step up our collective action to empower citizens and protect the democratic information space," Věra Jourová, the European Commission's vice president for values and transparency, said in a statement.

The commission's proposals issued Wednesday are meant to strengthen its Code of Practice on Disinformation, a voluntary pact signed by the world's largest social media companies in 2018. The signatories at the time included Facebook, Google, Twitter and Mozilla as well as trade associations representing online platforms, the advertising industry and advertisers.

"A new stronger Code is necessary as we need online platforms and other players to address the systemic risks of their services and algorithmic amplification, stop policing themselves alone and stop allowing to make money on disinformation, while fully preserving the freedom of speech," Jourová said.

The proposals announced Wednesday still must be finalized and agreed upon with the social media companies and other stakeholders.

If approved later this year, the rules would be the strongest steps any government has taken so far to press the biggest social media companies on how their algorithms fill users' feeds and how disinformation takes root on their sites.

Conspiracy theories about election fraud and the COVID-19 pandemic are frequent subjects of disinformation on social media. Facebook and Twitter have implemented tools to better curb disinformation, yet falsehoods still spread largely unchecked.

An NPR analysis found that articles connecting vaccines and death are frequently among the most highly engaged-with content online this year. Enforcement gaps also often leave certain users, particularly those who speak Spanish, vulnerable to disinformation.

The voluntary pact has done little since 2018 to hold platforms accountable. According to Politico Europe, a report from the European Court of Auditors is expected to conclude next week that the current code of practice fails to hold platforms responsible for the spread of disinformation.

Representatives for Google and Twitter indicated a willingness to work on the commission's guidance to cut down on the spread of disinformation.

In a statement to NPR, a Google spokesman said the company welcomes the publication of the commission's guidance and supports the goal of reducing the spread of harmful disinformation.

"We will study this guidance carefully to understand how best to interpret it for our diverse products and services," the spokesman said in a statement. "The global pandemic has shown that people need accurate information more than ever and we remain committed to making the Code of Practice a success."

Sinéad McSweeney, Twitter's EMEA vice president for public policy, said in a statement to NPR that the social media giant supports "an inclusive approach that takes a wider look at the information ecosystem to address the challenges of disinformation."

McSweeney said she agrees users "should have choices about the key algorithms that affect their experience online."

She added that Twitter looks forward to working with the commission on the disinformation guidance: "Regionally consistent co-regulatory standards are a crucial element in maintaining the Open Internet, ensuring that platforms of all sizes can operate with confidence around agreed norms."

The European Commission is currently developing stronger legislative proposals targeting harmful content online.

The updated Code of Practice is a precursor to the European Commission's Digital Services Act, part of a separate package of legislative proposals under which tech companies could face fines of up to 6% of annual revenue if they fail to stop the spread of harmful content or the sale of illegal goods.

Copyright 2021 NPR. To see more, visit https://www.npr.org.