17.4. provide for minimum standards for the working conditions of human moderators, including a requirement of adequate training to carry out their often stressful tasks and of access to proper psychological support and mental healthcare when needed;

17.5. sign and ratify the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (CETS No. 225, “Vilnius Convention”) and adopt or maintain measures to ensure that adequate transparency and oversight requirements tailored to the specific contexts and risks are in place to meet the challenges of the identification of content generated by artificial intelligence systems;

17.6. require that content generated by artificial intelligence is disclosed as such by those initially posting it and that social media implement technical solutions allowing for such content to be easily identified by users, and encourage collaboration between social media companies to ensure the interoperability of watermarking techniques for content generated by artificial intelligence;

17.7. require that out-of-court dispute settlement bodies, when established, are independent and impartial, have the necessary expertise, are easily accessible, and operate according to clear and fair rules, with certification of these requirements by the competent national regulatory authority;

17.8. promote, within the Internet Governance Forum and the European Dialogue on Internet Governance, reflection on the possibility for the internet community to develop, through a collaborative and, where appropriate, multi-stakeholder process, an external evaluation and auditing system aimed at determining whether algorithms are unbiased and respect the right to freedom of expression, and a “seal of good practices” which could be awarded to social media whose algorithms are designed to reduce the risk of “filter bubbles” and “echo chambers” and to foster an ideologically cross-cutting, yet safe, user experience.

18. The Assembly calls on social media companies to avoid measures that unnecessarily restrict the freedom of expression of users. They should, in particular:

18.1. directly incorporate principles of fundamental rights law, and in particular freedom of expression, into their terms and conditions;

18.2. use caution when moderating content that is not obviously illegal;

18.3. provide users with terms and conditions that are readily accessible, clear and informative on the types of content that are permissible on their services and the consequences for non-compliance, and which are understandable to the wide span of users notwithstanding differing levels of digital literacy and reading proficiency;

18.4. notify users without undue delay of any moderation action taken on their content, providing a comprehensive account of the rationale behind the decision, accompanied by a reference to the internal rules which have been applied;

18.5. refrain from shadow banning users’ content and notify users of every instance of demotion or delisting;

18.6. ensure that automated content moderation processes are subject to human oversight and to rigorous and continuous evaluation to assess their performance;

18.7. make available a system for handling complaints that is easily accessible, user-friendly, and allows users to make a precise complaint;

18.8. give human moderators appropriate training and working conditions which take account of the heavy psychological stress they are subjected to, and ensure adequate protection of their health;

18.9. refrain from permanent deletion of content (including its metadata) that has been removed in accordance with legal obligations or with terms and conditions, in particular when the content in question may serve as evidence of war or other crimes;

18.10. ensure that the artificial intelligence systems they develop or use uphold Council of Europe standards, including the new Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law; algorithms should be designed to respect the right to freedom of expression, and to encourage plurality and diversity of views and opinions while ensuring a safe user experience; their operating modalities should be disclosed, and users should be duly informed of how these algorithms filter and promote content;

18.11. collaborate with other online services with the aim of ensuring the interoperability of watermarking techniques for content generated by artificial intelligence;
