Ensuring inclusive social media: the importance of anti-bias training in content moderation

Social media platforms play a major role in shaping public opinion and connecting people worldwide, and at their best they offer an empowering, positive experience for everyone involved. 

However, with great power comes great responsibility. Content moderation is a critical function that ensures online spaces stay safe, inclusive, and respectful. 

GLAAD’s 2024 Social Media Safety Index Report highlights a crucial aspect of the responsibility that social media platforms need to take into account: the need for comprehensive training for content moderators—including those employed by contractors—particularly on the needs of vulnerable users, including the LGBTQ+ community.

Why is this important?

Providing thorough, specialized training for content moderators is crucial for several reasons. The primary goal is ensuring that all moderation decisions are fair and unbiased, grounded in policy rather than personal beliefs. 

Without specific training, personal biases can inadvertently influence decisions, leading to unfair treatment of certain groups. These biases aren’t necessarily intentional, but professional content moderators should work to eliminate as many of them as possible. 

For LGBTQ+ users, who are unfortunately frequent targets of online harassment and discrimination, having moderators who make every effort to understand their unique challenges and needs is essential for creating a safe online environment. 

Training on these challenges and needs fosters a culture of inclusivity, ensuring that all users—regardless of their background or identity—are treated with respect and dignity.

Some common examples of bias against the LGBTQ+ community

  • Everyone has a sexuality, yet LGBTQ+ people are often defined by theirs, even though it is just one part of their identity. As a result, some people perceive LGBTQ+ content as overly sexual or even pornographic compared to very similar content that is not LGBTQ+ focused. 
  • Transphobic people sometimes flag trans and nonbinary profiles as “fake” in reference to their gender, even though the flag option is meant for impersonation and spam profiles. This can result in incorrect moderation.
  • Policy for nudity is often written in a male/female binary, without clear guidelines for trans and nonbinary users, often resulting in a misunderstanding of the person’s identity and overmoderation of their photos. 

What could anti-bias training include?

To achieve the goal of fair, unbiased policy enforcement, content moderator training should be as comprehensive and multifaceted as possible.

Here at PartnerHero, every new hire (regardless of title or position) goes through basic Diversity & Inclusion and Pronouns training before starting work. We also have more advanced and specific courses for Trust & Safety teams, Team Leads, and Managers.

Here are some key components that anti-bias training might include:

  1. Anti-bias training: this is foundational in helping moderators recognize and mitigate their own biases. Anti-bias training educates moderators about the various forms of bias that can affect their judgment and provides strategies for making more objective decisions.
  2. Information on bias in user reports and AI content labeling: bias can creep in through user reports and AI algorithms. Training should cover how to identify and address these biases to ensure that moderation is consistent and fair.
  3. Cultural sensitivity and LGBTQ+ basics: content moderators need a solid understanding of cultural sensitivity and LGBTQ+ basics. This includes training on pronoun use, sexual orientation, gender identity, and related topics. This knowledge is crucial for understanding the context behind different user interactions and content.
  4. Descriptions and examples of power dynamics and microaggressions: understanding power dynamics and microaggressions helps moderators recognize situations where individuals or groups may be “punching down” or using their power to cause harm. This is particularly important in identifying and addressing bullying and harassment.
  5. Descriptions and examples of reclaimed language: certain terms, historically used as slurs, have been reclaimed by marginalized communities. Training should include descriptions and examples of such language, helping moderators distinguish between these terms being used positively within the community and being used derogatorily by others.

By incorporating these elements into their training programs, businesses can better equip their content moderators to handle the complexities of online interactions. 

This not only improves the quality of moderation, but also contributes to a safer, more inclusive online environment for all users.
As social media continues to evolve, so must the strategies we use to manage and moderate content.

GLAAD’s call for comprehensive training for content moderators is a crucial step towards ensuring that all users, particularly those from vulnerable communities such as LGBTQ+ people, are protected and respected online. 

By investing in such training, companies can demonstrate their commitment to fairness, inclusivity, and the well-being of their diverse user bases.

Alice Hunsberger