
8. The press and the media in general use social media as a platform for disseminating information to the public. It is therefore essential that content moderation practices do not unduly impact media and journalistic content that respects professional standards and the national regulatory framework.

9. Content moderation is increasingly carried out by automated means. Artificial intelligence tools are far more efficient than human moderators at processing, at high speed, the colossal amount of content circulating on the web in order to identify prohibited content. To date, however, they lack the capacity to fully understand the subtleties of human interaction (humour, parody, satire, etc.) and to assess content in its context.

10. For this reason, human moderators must remain the cornerstone of any content moderation system and be responsible for making decisions in cases where automated systems are not up to the task. However, human moderation can be biased and lead to inconsistencies among countries due to cultural differences; it is therefore imperative to establish clear and comprehensive standards and to guarantee appropriate training, to ensure that all moderators have the requisite knowledge of both the applicable legislation and the company's internal guidelines, as well as of the language and the context of the country from which the content originates. Moreover, in the event of a military conflict between two countries, moderators from one country party to the conflict should not moderate content originating from the other.

11. Regrettably, despite their fundamental role, human moderators' working conditions are inadequate: they are overexposed to disturbing content that can cause them serious mental health problems, and they suffer restrictions on their freedom to speak out about the problems they encounter at work.

12. Generative artificial intelligence tools make it possible to produce synthetic content that is virtually indistinguishable from human-generated content. Such content can be highly misleading, serve as a tool of disinformation and manipulation, and instigate hatred and discrimination, among other dangers. It is essential that users are made aware of content that appears to be genuine but is in fact not. In this regard, watermarking techniques are particularly useful, but they have several drawbacks, including their lack of interoperability among social media services.

13. Independent assessment of terms and conditions and content moderation policies and their enforcement, also with a view to identifying and promoting best practices, could help to ensure their consistency with principles which uphold a human rights-based approach to content moderation.

14. The establishment of clear and transparent rules for conflict resolution is essential to ensure the protection of users and to minimise the risk of being subjected to a potentially biased decision by the social media company, or of being forced to pursue costly legal action against a multinational corporation with enormous financial resources at its disposal.

15. The establishment of independent out-of-court dispute settlement bodies to assess content moderation decisions may prove beneficial in enhancing compliance with fundamental rights. Collaboration between social media companies in establishing such bodies could also hopefully facilitate dispute resolution.

16. As stated by the Assembly in its Resolution 2281 (2019) “Social media: social threads or threats to human rights?”, social media companies should employ algorithms that promote the diversity of sources, topics and views, guarantee the quality of information available, and thereby reduce the risk of “filter bubbles” and “echo chambers”.

17. In light of these considerations, the Assembly calls on the Council of Europe member States to review their legislation to better safeguard the right to freedom of expression on social media. In this respect, they should in particular:

17.1. require that social media uphold users’ fundamental rights, including freedom of expression, in their content moderation policy and implementation practices;

17.2. require that social media platforms provide justification for any measure taken to moderate content provided by the press or media service providers prior to its implementation and allow them an opportunity to reply within an appropriate timeframe;

17.3. in co-operation with the press or media organisations, implement a system of verification of media and journalist accounts, together with robust mechanisms to protect them from online harassment, hacks and fraud, and develop social media guidelines for press or media organisations on the publication of information on sensitive issues, with a view to avoiding unnecessary moderation restrictions on this type of content;

II SÉRIE-D — NÚMERO 23