Meta's recent updates to its content moderation policies have drawn sharp criticism from its independent Oversight Board. In a press release issued Wednesday, the board voiced serious concern about the rollout of the changes, particularly the apparent absence of any preparatory human rights assessment. The board noted that "no public information [was] shared as to what, if any, prior human rights due diligence the company performed" before the policy shifts took effect. That lack of transparency departs from expected procedure and raises serious questions about the unforeseen consequences the new rules may carry.

The Oversight Board specifically urged Meta to evaluate and proactively address the adverse effects the updated policies could have on vulnerable communities. It expressed particular concern for LGBTQ+ individuals, including minors, and for immigrant populations, who could be disproportionately affected by any weakening of protections against hate speech or harassment. The board's call emphasizes Meta's commitments under the UN Guiding Principles on Business and Human Rights, which call for engagement with affected stakeholders before significant policy changes are implemented. It also echoes warnings from external groups such as the Human Rights Campaign and Amnesty International, which have previously cautioned that Meta's policies could normalize harmful content and even fuel violence.

Beyond the direct impact on specific communities, the board also questioned the efficacy of newer content moderation tools, specifically requesting that Meta review its Community Notes system. The core question is whether that system combats the rapid spread of misinformation as effectively as traditional third-party fact-checking, especially in volatile situations where false narratives pose direct threats to public safety and social cohesion. The board stressed the need to understand how the two approaches compare in mitigating harm.

The board's intervention underscores a broader pattern of concern about Meta's handling of human rights issues, referencing the company's documented role in crises such as the Rohingya genocide and ongoing debates over content moderation related to Palestine. In response to the recent policy changes, the board issued 17 recommendations. These include calls for Meta to assess and report on the human rights impacts of its new policies every six months, strengthen enforcement of existing rules against bullying and harassment, clarify its definitions of hateful ideologies, and improve the detection of incitement to violence, especially in visual media.

Meta is expected to respond to the Oversight Board's recommendations within 60 days, but the suggestions are not binding. The situation highlights the persistent tension between platform governance, the protection of free expression, and the imperative to safeguard human rights, and it underscores the importance of transparency, rigorous due diligence, and meaningful engagement with potentially affected groups in developing the content policies that shape online discourse for billions of users worldwide. Ensuring accountability remains a central challenge in navigating these complex issues.