Meta's Content Moderation Policy Faces Scrutiny Amid Big Tech Absence

Edited by: Veronika Nazarova

The Brazilian Attorney General's Office (AGU) held a public hearing on January 22 to discuss Meta's new content moderation policy, which affects Instagram, WhatsApp, and Facebook. Notably absent were representatives from major tech companies such as Google, YouTube, Discord, Kwai, LinkedIn, Meta, TikTok, and X (formerly Twitter), raising questions about industry engagement in regulatory discussions.

The outcomes of this hearing will be compiled into a document for the Supreme Federal Court (STF), where Article 19 of the Internet Civil Framework is under review. This article addresses the liability of platforms for illegal content posted by users.

Attorney General Jorge Messias emphasized the government's commitment to dialogue with all platforms, stating that the absence of tech representatives does not hinder the ongoing discussions. He also highlighted the government's focus on protecting children and adolescents, as well as consumers and businesses utilizing social media.

The hearing was prompted by Meta's announcement of changes to its fake news verification policy, which will first be implemented in the U.S. but is expected to expand to Brazil, potentially facilitating the spread of misinformation.

Messias reiterated the federal government's dedication to creating a safe environment for all Brazilians, both online and offline, ensuring that parents feel secure about their children's online activities and that businesses can operate without fear.

Meanwhile, the legislative process for the Fake News Combat Bill (PL 2.630/20) remains stalled due to political pressures, despite a report being ready for vote. The AGU's findings will be forwarded to the STF and Congress for further consideration, as discussions on social media accountability continue.
