How Technology Platforms Should Deal with Hostile State-Owned Propaganda Outlets

A statement from Adrian Shahbaz, Research Director for Technology and Democracy

  • Social media and the internet have opened up new ways for hostile powers to directly abuse and influence individuals in democratic societies.
  • False news, information operations, and online propaganda pose significant but distinct threats to the functioning of a democracy.
  • Responses to these threats have the potential to infringe on fundamental rights, such as freedom of expression, access to information, privacy, and press freedom.
  • Any response must be carefully evaluated to ensure that it is strictly necessary to achieve a legitimate aim (protecting democracy) and carried out in a manner that limits unintended consequences and collateral damage. The cure should not be more harmful than the disease.

Our recommended response:

  • Remove content that is deliberately and unequivocally false under policies designed to combat spam or unauthorized use of the platform.
  • Label or eliminate automated “bot” accounts. Recognizing that bots can serve both helpful and harmful purposes, and acknowledging their role in spreading disinformation, companies should clearly label suspected bot accounts. Accounts that remain harmful even when labeled should be removed from the platform.
  • Do not remove state-owned media that function as propaganda outlets for hostile powers, unless they violate the platform’s terms of service through actions such as those cited above.
    1. Most articles published by the state-owned propaganda outlets of hostile powers would be difficult to classify as deliberately and unequivocally “false.”
    2. From our current perspective, banning these outlets would constitute a disproportionate response to the problem and could harm press freedom.
    3. We recommend tackling the issue through a more effective, transparent, and uniform application of platforms’ existing policies. Options to consider include down-ranking posts made by these outlets in news feeds, combating the artificial amplification of posts through the use of bots and fake accounts, and restricting the outlets’ ability to buy advertising on the platforms.
    4. We also encourage technology companies to prioritize well-established, credible, and local news sites over state-owned outlets from countries that do not receive a “Free” rating in Freedom House’s Freedom in the World report, such as Russia, China, Turkey, Iran, Saudi Arabia, and the United Arab Emirates.
  • Ensure fair and transparent content moderation practices. In order to fairly and transparently moderate public posts on their platforms and services, private companies should do the following:
    1. Clearly and concretely define what speech is not permissible in their guidelines and terms of service.
    2. If certain speech needs to be curbed, consider less invasive actions before restricting it outright, for example warning users that they are violating terms of service and adjusting algorithms that might unintentionally promote disinformation or incitement to violence.
    3. Ensure that content removal requests from governments are in compliance with international human rights standards.
    4. Publish detailed transparency reports on content takedowns—both for those initiated by governments and for those undertaken by the companies themselves.
    5. Provide an efficient avenue for appeal for users who believe that their speech was unduly restricted.
  • Engage in continuous dialogue with local civil society organizations.
    1. Companies should seek out local expertise on the political and cultural context in markets where they have a presence or where their products are widely used. These consultations with civil society groups should inform the companies’ approach to content moderation, government requests, and countering disinformation, among other things.
    2. We appreciate companies’ understanding that tackling the issue of disinformation and false news requires working with media companies and subject-matter experts. Efforts like the Facebook Journalism Project and the News Integrity Initiative provide crucial support for improving individuals’ digital media literacy. That endeavor will take time, but we believe that education is ultimately better than censorship as a tactic for dealing with disinformation, false news, and propaganda.

Twitter recently announced that it would no longer accept advertising from state-owned news sources. The statement above lays out additional steps that technology platforms should take.

Freedom House Press Release

