Chaya Raichik, who goes by Libs of TikTok online, believes Planet Fitness is currently facing the “most successful boycott since Bud Light” because the gym chain allows trans women to use women’s locker rooms.
But while Raichik and right-wing media outlets have claimed that the boycott caused the gym to lose “$400 million in value” — something Raichik called “a bloodbath” — there’s no proof, either of the boycott or its alleged financial harm to the gym. In fact, Planet Fitness has stood by its trans-inclusive policies and LGBTQ+ support for years despite right-wing backlash.
Raichik alleges that the boycott started because a Planet Fitness in Alaska banned Patricia Silva, an anti-LGBTQ+ woman who photographed and confronted a transgender woman in the women’s locker room; the trans woman was not harassing anyone. The gym revoked Silva’s membership and filed a police report against her for photographing people in the locker room, something the gym’s policies specifically forbid.
Raichik noted that the hashtag #BoycottPlanetFitness has been trending on X. However, there’s no proof that any of the people using the hashtag are current or former Planet Fitness members. Raichik then claimed that Planet Fitness has “a history of making women unsafe in the women’s locker rooms” because in September 2023, an adult assigned male at birth exposed themself to a 15-year-old girl at a Planet Fitness gym in Georgia.
It’s unclear whether the adult claimed to be transgender. Police arrested the adult, and Planet Fitness found that they had violated the gym’s anti-harassment and locker room modesty policies. The gym’s policies state that if a member “is acting in bad faith and improperly asserts a gender identity, they may be asked to leave and their membership may be terminated.”
Raichik also pointed to a 2015 incident in which the gym revoked a cis woman’s membership after she repeatedly complained to other gym members and staff about a trans woman in the locker room. The trans woman wasn’t harassing anyone, and she was a guest of another gym member.
“In expressing her concerns about the policy, the member in question exhibited behavior that club management deemed inappropriate and disruptive to other members, which is a violation of the membership agreement and as a result her membership was canceled,” Planet Fitness said in a statement about the 2015 incident.
The Planet Fitness chain is widely known for being a self-professed “judgement free zone,” a welcoming environment where people of all types can exercise without fear of intimidation and ridicule from other members. Its gender identity nondiscrimination policy states that members and guests may use all gym facilities based on their sincere self-reported gender identity. The gym has consistently stood up to transphobic members who harass and threaten trans patrons, even as red state legislators ban trans people from sports teams and school facilities.
Additionally, the chain has observed Pride Month, writing, “We strive to create a community where everyone is included and feels like they belong. Not just this month, but every day.” It has also partnered with the It Gets Better Project to help support LGBTQ+ youth.
However, Raichik claimed that the gym chain is now being punished for its progressive policies. She pointed out that, after news spread about the gym banning Silva, the company’s market value dropped from $5.3 billion on March 14 to $4.9 billion on March 19, a $400 million loss. However, the business’s stock price was already declining before March 11, when reports of Silva’s revoked membership began spreading online.
In fact, a five-year overview of Planet Fitness’ stock price shows that it usually peaks in January — the time of year when many people begin new gym memberships to fulfill their New Year’s resolutions to get in shape and lose weight — and then declines in the following pre-summer months. Because the business’s overall profitability is determined by monthly memberships (and this alleged boycott started barely two weeks ago), there’s no concrete proof that the drop in the gym’s stock value is connected to member outrage over its trans-inclusive policies.
[Fox Business chart: Planet Fitness’ stock price over the last month]
Raichik claims that Planet Fitness is “covering up membership cancellations” by members upset over its trans-inclusive policies. As proof, she pointed to a single video of a man who said that gym employees refused to note his disagreement with its trans policies on his official cancellation form. Raichik also posted a recording of a Planet Fitness customer service representative allegedly claiming that Silva’s photo of the trans gym member wasn’t taken at a Planet Fitness gym.
Raichik also claimed that Planet Fitness is trying “to groom kids by giving them ‘queer books’ to confuse them about their identity and introduce them to the extremely harmful radical gender ideology.” In actuality, the gym previously partnered with the It Gets Better Project to provide LGBTQ+-themed books to gay-straight alliances in schools. Raichik and other right-wingers constantly claim that any mention of LGBTQ+ issues to kids is an attempt to “groom” them.
She also blamed Planet Fitness for allowing a registered sex offender, Adam Yindana, to join one of their gyms. However, she didn’t mention that Yindana joined the gym using an alias, that he isn’t trans and never said he was, and that he wasn’t harassing anyone in the women’s locker room. The gym canceled his membership and assisted police with their investigation when a gym member accused Yindana of following her around the facility and recording her with his smartphone.
LGBTQ Nation reached out to Planet Fitness for a comment regarding Raichik’s claims.
In the bustling streets of New York City, the heartbeat of Stuzo Clothing resonates with the vibrant spirit of inclusivity and self-expression. Founded by the visionary Stoney Michelli Love, Stuzo emerges as more than just a fashion brand; it’s a sanctuary where all individuals find solace from judgment and labels. And with Women’s History Month coming to a close, it turns out that celebrating the women who came before and creating gender-free apparel go hand in hand.
Stuzo Clothing’s “Live Your Truth” T-Shirt is available to shop on ThePrideStore.com
With a mission to challenge societal norms, Stuzo Clothing embraces the concept of gender-free clothing, or as Stoney affectionately puts it, “clothes without organs.” In a recent interview, Stoney shared insights into the genesis of Stuzo: “In 2008, during my final year of graphic design studies in the Bronx, I faced homophobia, racism, and sexism, which became a wellspring of inspiration.”
Stoney’s personal experiences with prejudice fueled the creation of Stuzo, culminating in a brand that champions diversity and authenticity. “My shopping experiences were divided by gendered sections, creating a sense of exclusion,” she recalls. “Shopping in spaces lacking a middle ground was frustrating.”
At the core of Stuzo’s philosophy lies an organic creative process, reflecting Stoney’s commitment to authenticity and fluidity. “My creative process is organic. It comes to me,” she explains. “I allow myself to be inspired by anything and everything.”
Drawing inspiration from sources as diverse as royalty and reality TV, Stoney crafts designs that resonate with the bold and non-conforming. “I let things speak to me and things inspire me, and I live my life,” she adds.
Yet, Stuzo’s journey hasn’t been without its challenges. Stoney reflects on the pivotal moment when she redefined the brand as gender-free after a period of self-reflection. “I had to take a step back and check in with myself,” she shares. “After this realization, Stuzo became a gender-free company again.”
Stuzo Clothing’s “FemBoi” Tank is available to shop on ThePrideStore.com
Navigating through personal and professional growth, Stoney acknowledges the transformative power of embracing one’s truth. “There’s a lot of unpacking, learning, relearning, and regaining power,” she affirms. “But I ended up being a better business person because I was able to come back to my truth and values, so I’m grateful for that.”
As Women’s History Month unfolds, Stoney reflects on its significance for Stuzo and herself: “It’s celebrating the female humans that came before us, and fought for our equal rights. It’s celebrating those with us in the now that continue the fight and fighting by simply being themselves. And it’s about celebrating the youth, who are the future ones leading the charge for the balance in humanity.”
Through Stuzo Clothing, Stoney Michelli Love epitomizes the resilience and creativity of women entrepreneurs, inspiring individuals to embrace their authenticity and rewrite the narrative of fashion.
Receive free shipping with promo code “LUCKY” (valid thru 3/31) when you shop ThePrideStore.com.
Google has partly disabled its artificial intelligence (AI) image generator Gemini after the software produced racially diverse and historically inaccurate images of Black Vikings, female popes, and people of color as the United States “founding fathers.”
Gemini produced these images without users asking for such diversity, leading right-wing critics to blast the software as “woke.” However, the incident revealed not only a technical problem but a “philosophical” one about how AI and other tech should address biases against marginalized groups.
“It’s clear that this feature missed the mark. Some of the images generated are inaccurate or even offensive,” Google Senior Vice President Prabhakar Raghavan wrote in a company blog post addressing the matter.
He explained that Google tried to ensure that Gemini didn’t “fall into some of the traps we’ve seen in the past with image generation technology,” such as creating violent or sexually explicit images, depictions of real people, or images showing people of just one ethnicity, gender, or other characteristic.
“Our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range,” Raghavan wrote. “[This] led the model to overcompensate in some cases … leading to images that were embarrassing and wrong.”
He then said that Gemini will be improved “significantly” and receive “extensive testing” before generating more images of people. But he warned that AI is imperfect and may always “generate embarrassing, inaccurate, or offensive results.”
While some right-wing web commenters, like transphobic billionaire Elon Musk, accused Gemini of being “woke,” this sort of problem isn’t unique to Google. Sam Altman, the gay CEO of OpenAI, acknowledged in 2023 that his company’s technology “has shortcomings around bias” after its AI-driven ChatGPT software generated racist and sexist responses. Numerous kinds of AI-driven software have also exhibited bias against Black people and women, resulting in these groups being falsely labeled as criminals, denied medical care, or rejected from jobs.
Such bias in AI tech occurs because the technology makes its decisions based on massive pre-existing data sets. Since such data often skews in favor of or against a certain demographic, the technology will often reflect this bias as a result. For example, some AI-driven image generators, like Stable Diffusion, create racist and sexist images based on Western stereotypes that depict leaders as male, attractive people as thin and white, criminals and social service recipients as Black, and families and spouses as different-sex couples.
“You ask the AI to generate an image of a CEO. Lo and behold, it’s a man,” Vox tech writer Sigal Samuel wrote, explaining the dilemma of AI bias. “On the one hand, you live in a world where the vast majority of CEOs are male, so maybe your tool should accurately reflect that, creating images of man after man after man. On the other hand, that may reinforce gender stereotypes that keep women out of the C-suite. And there’s nothing in the definition of ‘CEO’ that specifies a gender. So should you instead make a tool that shows a balanced mix, even if it’s not a mix that reflects today’s reality?”
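To make that mechanism concrete, here is a minimal, purely illustrative Python sketch (not the code of Gemini, Stable Diffusion, or any real image generator): a toy “model” learns attribute frequencies from a skewed dataset and then reproduces that skew when asked to generate a CEO. All names and numbers here are invented for illustration.

```python
import random
from collections import Counter

# Toy "training data": occupation -> gender pairs, skewed the way
# scraped web data often is (most labeled CEO examples are of men).
# The 90/10 split is invented purely for illustration.
training_data = [("CEO", "man")] * 90 + [("CEO", "woman")] * 10

# "Training" here is just counting conditional frequencies -- the
# simplest stand-in for how a generative model fits its data.
counts = Counter(gender for occupation, gender in training_data
                 if occupation == "CEO")
total = sum(counts.values())
learned = {gender: n / total for gender, n in counts.items()}

def generate_ceo(rng: random.Random) -> str:
    """'Generate' a CEO by sampling from the learned distribution.

    Nothing in the data or the objective pushes back against the
    skew, so a 90/10 imbalance in the inputs becomes roughly a
    90/10 imbalance in the outputs.
    """
    r = rng.random()
    cumulative = 0.0
    for gender, p in learned.items():
        cumulative += p
        if r < cumulative:
            return gender
    return gender  # guard against floating-point rounding

rng = random.Random(0)
outputs = Counter(generate_ceo(rng) for _ in range(1000))
print(outputs)  # roughly Counter({'man': 900, 'woman': 100})
```

Seen this way, Google’s “tuning” Gemini to show a range of people is an adjustment to that sampling step, and over-adjusting it, as Raghavan described, is how a model ends up inserting diversity even where history says it shouldn’t.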
Resolving such biases isn’t easy and often requires a multi-pronged approach, Samuel explains. Foremost, AI developers must anticipate which biases might occur and then calibrate software to minimize them in a way that still produces desirable results. Some users of AI image generators, for example, may actually want pictures of a female pope or Black founding fathers — after all, art often creates new visions that challenge social standards.
But AI software also needs to give users a chance to offer feedback when the generated outcomes don’t match their expectations. This gives developers insights into what users want and helps them create interfaces that allow users to request specific characteristics, such as having certain ages, races, genders, sexualities, body types, and other traits reflected in images of people.
Sen. Ron Wyden (D-OR) has tried to legislate this issue by co-sponsoring the Algorithmic Accountability Act of 2022, a bill that would require companies to conduct impact assessments for bias based on the code they use to generate results. The bill wouldn’t require companies to produce unbiased results, but it would at least provide insights into the ways that technology tends to prefer certain demographics over others.
Meanwhile, though critics blasted Gemini as “woke,” the software at least tried to create racially inclusive images, something many other image generators haven’t bothered to do. Google will now spend the next few weeks retooling Gemini to create more historically accurate images, but similar AI-powered image generators would do well to retool their own software to create more inclusive images. Until then, both will continue to churn out images that reflect our own biases rather than the world’s true diversity.
A gay rights activist has criticised Grindr for its trans and nonbinary-inclusive filter.
Currently, the dating app prevents users from filtering solely for cisgender men and women. “It was important to us to not further perpetuate discrimination and harm for the trans and nonbinary community,” the site explains.
Instead, users can filter who they’d like to speak to on the app. The trans-inclusive filter details “three umbrella gender groups”: men, women and non-binary people.
Fred Sargeant took to X (formerly Twitter) on 5 February to question the app’s filter. The app detailed its trans-inclusive filter in a frequently asked questions section of the website.
However, Sargeant criticised the app – which was the most popular among users aged 54+ last year. “So, filtering for gay men is bad but filtering for trans and nonbinary is okay @Grindr? You recognise trans/nonbinary needs to discriminate while invalidating precisely the same need for gay men.
“Delete your service @Grindr. You’re no longer needed.”
Fred Sargeant questioned the app’s filter. (X/@FredSargeant)
Others took to the comments section, with one social media user branding the app “homophobic” for its trans-inclusive filter. “Gay men, @Grindr is no longer for us,” a different person wrote.
A representative of Grindr told PinkNews: “Trans men, trans women, and non-binary people have been a part of the Grindr community from the very beginning, and we have always been committed to creating a safe space for trans users.
“This has included working with the National Center for Transgender Equality on profile fields for gender identity and pronouns, and building a Gender Identity Resource Center so that cis Grindr users can increase their understanding of trans people and issues.
“As part of our commitment to our trans users, we are pursuing a gender filtering system to allow all users, trans and cis, to find who they’re looking for on the app. Users who are only interested in men, for example, can select not to see women and explore a Grindr cascade of men. Of course, trans men are men, so users filtering to see men will see both cis and trans men.”
What is Grindr’s trans-inclusive filter?
The FAQ on the website regarding the trans-inclusive filter reads: “Why can’t I filter for Cis Men or Cis Women?”
“We allow filtering based on gender – you can specify that you want to see men or women – but this will include all men or all women, because trans men are men and trans women are women.
“You can also filter for trans and nonbinary people, as we know it’s critical for this community to be able to find each other easily.”
Grindr explains its trans-inclusive filter on its website. (Grindr)
The FAQs also explain that the “Cis Man/Woman” gender identities have changed to “Man/Woman” as part of their “ongoing commitment to inclusivity and an effort to reduce discrimination toward trans and nonbinary folks”.
Users can edit their gender and pronouns in the app’s settings “to search (and be found) by gender identity”.
“We have a list of 50+ gender identities to choose from – and even provide non-gendered pronoun options for languages that don’t have gendered pronouns. To ensure that culturally specific gender identity terms were included in that list, we partnered with experts across 20 languages.
“We’re still learning every day – and always open to expanding that list. If you have a gender identity or pronoun suggestion for us, please submit it using the Suggest a Gender link on the gender selection screen.”
Since he moved from Atlanta in 2012, Detroit native Kevin Heard has been devoted to one ambitious goal: creating opportunity for LGBTQ+ entrepreneurs to succeed in the challenging business environment of the Motor City.
“I didn’t see or was aware of a professional LGBTQ community. I wanted to cultivate that,” Heard told The Detroit News. “I saw a need for organizations that have a fiscal responsibility, voice for and advocate for LGBT-owned businesses. I also felt as though it’s really important to possibly and intentionally curate an LGBT business district within the city of Detroit, like all major metropolitan areas across the nation have.”
So Heard founded the Detroit LGBT Regional Chamber of Commerce, which has distributed thousands of dollars to up-and-coming small businesses and entrepreneurs to pay for leases, buy equipment, and scale their dreams. Recent contracts for members include a Ford Motor Co. agreement and a pending contract with the NFL Draft when it comes to Detroit in April.
One chamber member is coffee house Eastside Roasterz, a passion project from Tiffany and Riss Dezort, who moved from Washington, D.C., where the LGBTQ+ population is three times higher than in Detroit.
It was a culture shock.
“When it comes to building a business with all of that in mind, that’s really what we went to Kevin for. ‘Hey, would you have a better understanding of queerness and business crossover and how to navigate that here in Michigan?’” Riss Dezort said.
The Dezorts have earned over $35,000 in business grants from Michigan organizations, but the biggest boost came from the LGBT Chamber, which provided a 12-week accelerator program and mentorship in navigating the business environment in Detroit.
Members of the LGBT Chamber include Corktown Health, La Feria + Cata Vino, Welcome Home Yoga and Wellness, and the Dezorts’ Eastside Roasterz, which supplies coffee for BasBlue, Sister Pie, and Next Chapter Books. The coffee spot also offers wholesale coffee purchases online and operates pop-up shops.
Heard offered, “I’m looking at this as an opportunity to bring more great, innovative young people who would like to stay and live in the state of Michigan. To be inclusive of that, to know that this is a space that people can start their families regardless of sexual orientation, gender identity expression.”
“Discrimination is bad for business… we know this to be true,” out Michigan Attorney General Dana Nessel said recently at a town hall for the LGBT Chamber. “This is not wishful thinking… the more inclusive we are, the more we do to reach out to all communities, the better it is for business in our state.”
People want to live in a place “that will treat them equally and fairly, where they know that they won’t be discriminated against in all different areas of their life,” Nessel said.
But obstacles remain, Heard says.
“The barriers in which LGBT people get when it comes to businesses are the gatekeepers at traditional banks that are maybe homophobic, may have their unconscious biases in when looking at or actually meeting the candidate. They look great on paper, but they don’t like their lifestyle, and that has been honestly one of the biggest barriers.”
Part of the Chamber’s mission, Heard said, is showing LGBTQ+ people in spaces “other than just the typical bar-hopping, Pride parades.”
“We are in every industry, every level of an organization,” Heard said, “and we own more than bakery shops and bars.”
The CEOs of Facebook, Instagram, TikTok, X, Snap, and Discord testified in the Senate on Wednesday to discuss the online exploitation of children. The discussion brought up the Kids Online Safety Act (KOSA), a bipartisan bill that seeks to protect minors from online harm. But KOSA has come under fire from some LGBTQ+ activists and groups who fear that the bill will enable Republicans to block queer youth from seeing age-appropriate LGBTQ+ content online.
Laura Marquez-Garrett, an attorney with the Social Media Victims Law Center, says revisions to the bill have helped ensure that its current version will protect all kids and safeguard against potential misuse by anti-LGBTQ+ politicians. But Evan Greer, director of Fight for the Future, a nonprofit that protects people’s human rights in the digital age, says KOSA unconstitutionally violates free speech rights and will result in social media companies broadly censoring LGBTQ+ content rather than risking lawsuits from attorneys general.
It’s undeniable that social media can negatively impact mental health. Last year, the U.S. Surgeon General issued an advisory noting how the frequency and kinds of information shown to young people on social media can cause a “profound risk of harm” to their mental health.
“Children and adolescents on social media are commonly exposed to extreme, inappropriate, and harmful content, and those who spend more than three hours a day on social media face double the risk of poor mental health including experiencing symptoms of depression and anxiety,” the Surgeon General’s report on Social Media and Youth Mental Health said. Social media’s content and design can also make some young people feel addicted to it, increasing body dysmorphia, low self-esteem, and even self-harming behaviors, the report added.
KOSA tries to remedy this by requiring online platforms to take measures to prevent recommending content that promotes mental health disorders (like eating disorders, drug use, self-harm, sexual abuse, and bullying) unless minors specifically search for such content. KOSA also requires platforms to limit features that result in compulsive usage — like autoplay and infinite scroll — or allow adults to contact or track young users’ location. The bill says platforms must provide parents with easy-to-use tools to safeguard their child’s social media settings and notify parents if their kids are exposed to potentially hazardous materials or interactions.
Furthermore, KOSA requires platforms to submit annual reports to the federal government containing details about their non-adult users, the internal steps they’ve taken to protect minors from online harms, the “concern reports” – or reports platforms issue parents when their child encounters any harmful content – they’ve issued to parents, and descriptions of interventions they’ve taken to mitigate harms to minors. These reports will be overseen by an independent third-party auditor who consults with parents, researchers, and youth experts on additional methods and best practices for safeguarding minors’ well-being online.
KOSA has bipartisan support, including that of President Joe Biden as well as 46 senatorial co-sponsors, 21 of whom are Democrats, including lesbian Sen. Tammy Baldwin (WI) and LGBTQ+ allies like Sen. Amy Klobuchar (MN) and Sen. Elizabeth Warren (MA). LGBTQ Nation reached out to Baldwin and Warren’s offices for additional comment but didn’t receive a response by the time of publication. KOSA is also supported by groups like Common Sense Media, Fairplay, Design It For Us, Accountable Tech, Eating Disorders Coalition, American Psychological Association, and the American Academy of Pediatrics.
But while parents of transgender youth and numerous pro-LGBTQ+ organizations agree that social media can negatively impact young people’s mental health, many other groups have nonetheless opposed the bill, including the American Civil Liberties Union (ACLU), the Woodhull Freedom Foundation, the LGBT Technology Partnership, as well as LGBTQ+ advocacy organizations in six states.
“KOSA is, at its heart, a censorship bill,” Mandy Salley, Chief Operating Officer of the Woodhull Freedom Foundation, a group that advocates for sexual freedom as a fundamental human right, told LGBTQ Nation. “If passed in its current form, we believe that KOSA will hinder the ability of everyone to access information online and negatively harm many communities that are already censored online, including sex therapists, sex workers, sex educators, and the broader LGBTQ+ community. Our human right to free expression cannot be ignored in favor of supposed ‘safety’ on the Internet.”
The big sticking point: KOSA’s Duty of Care provision
Specifically, Woodhull and the other aforementioned organizations are worried about the bill’s Duty of Care provision that allows attorneys general to conduct investigations, issue subpoenas, require documentation from, and file civil lawsuits against any platforms that have “threatened or adversely affected” minors’ well-being. LGBTQ+ advocates fear that Republican attorneys general who consider LGBTQ+ identities as harmful forms of mental illness will use KOSA to censor such web content and prosecute platforms that provide access to such content.
In a July 2023 Teen Vogue op-ed, digital rights organizer Sarah Philips wrote that the bill “authorizes state attorneys general to be the ultimate arbiters of what is good or bad for kids. If a state attorney general asserts that information about gender-affirming care or abortion care could cause a child depression or anxiety, they could sue an app or website for not removing that content.”
It didn’t help that KOSA was introduced by anti-LGBTQ+ Sen. Marsha Blackburn (R-TN), who has said that one of the bill’s top priorities is to protect children from “the transgender in this culture.”
“[Social media] is where children are being indoctrinated,” Blackburn told the Family Policy Alliance, a conservative Christian organization, in a September 2023 speech. “They’re hearing things at school and then they’re getting onto YouTube to watch a video and all of a sudden this comes to them… They click on something and, the next thing you know, they’re being inundated with it.”
Blackburn’s office told LGBTQ Nation that her comment had been “taken out of context” and wasn’t related to KOSA, stating, “KOSA will not — nor was it designed to — target or censor any individual or community.” But the anti-LGBTQ+ conservative think tank Heritage Foundation has also said it wishes to use the law to “guard” kids against the “harms of… transgender content.”
But Marquez-Garrett told LGBTQ Nation that these concerns are based on an old version of the bill that has since been revised after consultation with concerned LGBTQ+ activists.
“If [the possibility of an attorney general misusing a law is] the standard by which we judge all laws, we’re never going to have new laws because the reality is an unscrupulous attorney general can try,” she said. “But it doesn’t mean they’re going to succeed.”
First, she points out that Philips’s concern about attorneys general suing platforms for not removing pro-LGBTQ+ content doesn’t necessarily apply, for two reasons: KOSA doesn’t regulate what LGBTQ+ or allegedly harmful content a site can host — it regulates what content websites automatically suggest to young users. Users of all ages can still access any material that they deliberately search for.
Moreover, attorneys general have to prove to a judge and the Federal Trade Commission (FTC) that, by KOSA’s definitions, LGBTQ+ content harms young users’ mental health. Such arguments won’t pass muster with every judge or FTC commissioner.
Marquez-Garrett noted that after Sen. Blackburn made her concerning comments, the bill was revised with input from queer advocates and reintroduced with amendments meant to account for those concerns. For example, while the original bill broadly required web platforms to prevent all “harms” to minors, the revised bill specifically names the harms companies must work against (including suicidal behaviors, eating disorders, substance use, sexual exploitation, and ads for tobacco and alcohol).
She also notes that an attorney general who begins a civil action under KOSA is required to report that action to the FTC. The FTC will then have the right to intervene.
“The FTC is only as good as the people running it,” Marquez-Garrett told LGBTQ Nation. “And we don’t know what’s going to happen in the future.” But, assuming that the FTC is “not nefarious and is reasonable,” she continued, if the FTC begins an investigation into the actions, the attorney general’s home state is forbidden from taking any additional actions.
Marquez-Garrett also points out that the revised version of KOSA contains a carveout that says that if a minor searches for any sort of content, including LGBTQ+ content, then they’re allowed to see it even if an attorney general considers it harmful. Additionally, KOSA also explicitly excludes many websites from its control, including government platforms, libraries, and non-profits. That means if a minor finds pro-LGBTQ+ content on the websites of the ACLU, The Trevor Project, or the Human Rights Campaign, an attorney general can’t prosecute.
Furthermore, under the revised KOSA, websites aren’t required to install age verification or parental consent functionality that might prevent young people from accessing different platforms. Though Greer questioned how social media platforms can comply with the bill without conducting age verification, Marquez-Garrett says Greer’s question ignores KOSA’s plain language and echoes “another Big Tech narrative about Big Tech’s ability or inability to comply with KOSA.”
Regardless, under KOSA, platforms are also expressly forbidden from being required to disclose a minor’s browsing behavior, search history, messages, contact list, or other content or metadata of their communications that could potentially out them to their parents.
“We totally agree that big tech platforms and the surveillance capitalist business model that they employ are doing real harm, and that they’re specifically harming LGBTQ people and communities,” Greer, director of Fight for the Future (FFF), told LGBTQ Nation. “But as long as KOSA attempts to dictate what content platforms can recommend, it will be unconstitutional.”
FFF and the ACLU have said that the government cannot force platforms to suppress entire categories of content or to suppress all content that might lead to a minor becoming depressed or anxious without violating the First Amendment.
Greer said that legislators behind KOSA should have consulted more with civil liberties and human rights advocates, like her organization and the ACLU, to consider the bill’s potential constitutional and human rights pitfalls.
Marquez-Garrett disagrees with Greer’s characterization, telling LGBTQ Nation, “KOSA does not prohibit content of any sort, nor does it prohibit posting of any content by third parties, so does not run afoul of the First Amendment.”
Apart from the constitutionality issue, Greer most worries that if social media companies are subjected to liability for content, they will over-remove content to avoid getting sued. “This is exactly what happened with SESTA,” she said, referring to SESTA/FOSTA, a pair of bipartisan laws passed in 2018 that sought to reduce sex trafficking online.
Because the law held online companies liable for any user content that could be seen as facilitating sex work, many online businesses just opted to shut down any forums for sex or dating. Others banned any potential “adult content” (including discussion boards), deleted content about avoiding sexually transmitted infections, and created rules forbidding sexual comments. The law made sex workers much more vulnerable to traffickers and made actual sex trafficking much more difficult to track, its critics say. Even Sen. Warren, who supported the law, expressed regret for its unintended consequences.
“Do I think that Mark Zuckerberg is going to go to bat in court to protect my kid’s ability to continue engaging in the online communities that she finds supportive and loving and caring? Absolutely not,” Greer said. “He’s gonna roll over and do whatever he thinks he needs to do to avoid his company getting sued,” she added, especially if they’re threatened by “rogue” attorneys general, conservative judges, or an FTC run by the administration of Donald Trump.
“Do people really want to gamble with trans kids’ lives hoping that we’ll never have a bigot in the White House ever again? I sure don’t,” Greer added.
In an informational white paper, FFF said that if a user searches for “Why do I feel different from other boys,” and a platform returns search results about gender identity, an attorney general can argue that that’s not what the user was searching for, and thus the platform is liable for “algorithmically recommending” that content.
Is there a way to fix KOSA’s potential problems?
If KOSA becomes law, Greer and other groups worry, social media companies won’t risk attracting the attention of these attorneys general. Instead, the companies will react by omitting, algorithmically suppressing, or blocking large swaths of LGBTQ+ content — not just the content “recommended” by platform algorithms.
This would affect not only content related to LGBTQ+ issues and other controversial but important topics shown to users platforms believe could be minors (including content from The Trevor Project or the Human Rights Campaign, Greer says), but also any posts sharing information about queer health resources, life experiences, and social events, Greer predicted, since algorithms govern the distribution of all social media content.
“I truly believe that legislation [like KOSA] that enables this type of government censorship makes kids less safe, and not more safe,” Greer says. “It feels to me like it’s driven by the same bad thinking behind abstinence-only sex education: the idea that we protect kids by cutting them off from information rather than by allowing them to access it.”
Marquez-Garrett disagrees. “KOSA is plain on its face, and efforts to misinterpret KOSA will not succeed. If a conservative attorney general could simply attack a type of content it doesn’t like, then liberal attorneys general could do the same, such as with guns, or political content, or any number of potentially objectionable topics. And KOSA’s own limitations would provide complying platforms with viable defenses.”
But instead of supporting KOSA in its current form, FFF has encouraged legislators to ditch its Duty of Care provision and replace it with a strict privacy regime that bans any use of minors’ personal data to power algorithmic recommendation systems. The FFF also suggested explicitly prohibiting specific manipulative business practices, like autoplay, infinite scroll, intrusive notifications, and surveillance advertising.
Lawmakers should also drop the provision in KOSA allowing enforcement by attorneys general, the FFF suggests. Instead, its provisions could be enforced by the FTC as “unfair or deceptive business practices,” which the FTC already has a mandate to crack down on. This would aid the law’s constitutionality and bring the law into the realm of regulating these businesses the same way that the federal government already regulates many other businesses.
Some social media platforms and influencers are opposed to any government oversight, Marquez-Garrett says, because policies that limit what their algorithms can recommend also reduce their overall content engagement and, thus, their profits.
Currently, social media platforms aren’t protecting LGBTQ+ kids, she adds. A minor who searches for “gay pride” may be served videos telling them that being gay is bad and that gay people are going to hell and should kill themselves. Platforms also regularly remove LGBTQ+ content for allegedly violating platform policies or potentially offending users in other countries.
She believes that KOSA could help open the playing field for platforms that don’t harmfully target kids because any such actions will become a matter of public record and scrutiny. This will allow ethical web designers to create better systems that protect children’s needs. That’s especially important, she said, since numerous studies have shown that access to positive online LGBTQ+ media and communities can improve young queer people’s mental health.
Ultimately, she believes that everyone should support protecting children, especially as more studies show how negative online experiences can increase mental distress and suicidality among kids.
“We cannot give big tech a free pass and assume they have our kids’ best interests at heart,” she said.
We are calling on Facebook and Instagram to do more to make their social media platforms safe for LGBT users who face digital targeting and severe offline consequences including detention and torture.
In February 2023, Human Rights Watch published a report on the digital targeting of LGBT people in Egypt, Iraq, Jordan, Lebanon, and Tunisia, and its offline consequences. The report details how government officials across the Middle East and North Africa (MENA) region are targeting LGBT people based on their online activity on social media, including on Meta platforms. Security forces have entrapped LGBT people on social media and dating applications; subjected them to online extortion, online harassment, doxxing, and outing; and relied on illegitimately obtained digital photos, chats, and similar information in prosecutions. In cases of online harassment, which took place predominantly in public posts on Facebook and Instagram, affected individuals faced offline consequences that often contributed to ruining their lives.
As a follow-up to the report and based on its recommendations, including those directed to Meta, the “Secure Our Socials” campaign identifies ongoing issues of concern and aims to engage Meta platforms, particularly Facebook and Instagram, to publish meaningful data on their investment in user safety, including content moderation in the MENA region and around the world.
On January 8, 2024, Human Rights Watch sent an official letter to Meta to inform relevant staff of the campaign and its objectives, and to solicit Meta’s perspective. Meta responded to the letter on January 24.
Social media platforms can provide a vital medium for communication and empowerment. At the same time, LGBT people around the world face disproportionately high levels of online abuse. Particularly in the MENA region, LGBT people and groups advocating for LGBT rights have relied on digital platforms for empowerment, access to information, movement building, and networking. In contexts in which governments prohibit LGBT groups from operating, organizing by activists to expose anti-LGBT violence and discrimination has mainly happened online. While digital platforms have offered an efficient and accessible way to appeal to public opinion and expose rights violations, enabling LGBT people to express themselves and amplify their voices, they have also become tools for state-sponsored repression.
Building on research by Article 19, Electronic Frontier Foundation (EFF), Association for Progressive Communication (APC), and others, Human Rights Watch has documented how state actors and private individuals have been targeting LGBT people in the MENA region based on their online activity, in blatant violation of their right to privacy and other human rights. Across the region, authorities manually monitor social media, create fake profiles to impersonate LGBT people, unlawfully search LGBT people’s personal devices, and rely on illegitimately obtained digital photos, chats, and similar information taken from LGBT people’s mobile devices and social media accounts as “evidence” to arrest and prosecute them.
LGBT people and activists in the MENA region have experienced online entrapment, extortion, doxxing, outing, and online harassment, including threats of murder, rape, and other physical violence. Law enforcement officials play a central role in these abuses, at times initiating online harassment campaigns by posting photos and contact information of LGBT people on social media and inciting violence against them.
Digital targeting of LGBT people in the MENA region has had far-reaching offline consequences that did not end in the instance of online abuse, but reverberated throughout affected individuals’ lives, in some cases for years. The immediate offline consequences of digital targeting range from arbitrary arrest to torture and other ill-treatment in detention, including sexual assault.
Digital targeting has also had a significant chilling effect on LGBT expression. After they were targeted, LGBT people began practicing self-censorship online, including in their choice of digital platforms and how they use those platforms. Those who cannot or do not wish to hide their identities, or whose identities are revealed without their consent, reported suffering immediate consequences ranging from online harassment to arbitrary arrest and prosecution.
As a result of online harassment, LGBT people in the MENA region have reported losing their jobs, being subjected to family violence including conversion practices, being extorted based on online interactions, being forced to change their residence and phone numbers, delete their social media accounts, or flee their country of residence, and suffering severe mental health consequences.
Meta is the largest social media company in the world. It has a responsibility to safeguard its users against the misuse of its platforms. Facebook and Instagram, in particular, are significant vehicles for state actors’ and private individuals’ targeting of LGBT people in the MENA region. More consistent enforcement and improvement of its policies and practices can make digital targeting more difficult and, by extension, make all users, including LGBT people in the MENA region, safer.
As an initial step toward transparency, the “Secure Our Socials” campaign asks Meta to disclose its annual investment in user safety and security, including reasoned justifications explaining how trust and safety investments are proportionate to the risk of harm, for each MENA-region language and dialect. We specifically inquire about the number, diversity, regional expertise, political independence, training qualifications, and relevant language (including dialect) proficiency of staff or contractors tasked with moderating content originating from the MENA region, and request that this information be made public.
Meta frequently relies on contractors and subcontractors to moderate content, and it is equally important for Meta to be transparent about these arrangements.
Outsourcing content moderation should not come at the expense of working conditions. Meta should publish data on its investment in safe and fair working conditions for content moderators (regardless of whether they are staff, contractors, or sub-contractors), including psychosocial support; as well as data on content moderators’ adherence to nondiscrimination policies, including around sexual orientation and gender identity. Publicly ensuring adequate resourcing of content moderators is an important step toward improving Meta’s ability to accurately identify content targeting LGBT people on its platforms.
We also urge Meta to detail what automated tools are being used in its content moderation for each non-English language and dialect (prioritizing Arabic), including what training data and models are used and how each model is reviewed and updated over time. Meta should also publish information regarding precisely when and how automated tools are used to assess content, including details regarding the frequency and impact of human oversight. In addition, we urge Meta to conduct and publish an independent audit of any language models and automated content analysis tools being applied to each dialect of the Arabic language, and other languages in the MENA region for their relative accuracy and adequacy in addressing the human rights impacts on LGBT people where they are at heightened risk. To do so, Meta should engage in deep and regular consultation with independent human rights groups to identify gaps in its practices that leave LGBT people at risk.
Meta’s over-reliance on automation when assessing content and complaints also undermines its ability to moderate content in a manner that is transparent and free of bias. Meta should develop a rapid response mechanism to ensure that LGBT-specific complaints [in high-risk regions] are reviewed in a timely manner by a human with regional, subject matter, and linguistic expertise. Meta’s safety practices can also do more to make its platforms less prone to abuse of LGBT people in the MENA region: public disclosures have shown that Meta has frequently failed to invest enough resources in its safety practices, sometimes rejecting internal calls for greater investment in regional content moderation even at times of clear and unequivocal risk to its users.
In the medium term, Human Rights Watch and its partners call on Meta to audit the adequacy of existing safety measures and continue to engage with civil society groups to carry out gap analyses on existing content moderation and safety practices. Finally, regarding safety features and based on uniform requests by affected individuals, we recommend that Meta implement a one-step account lockdown tool of user accounts, allow users to hide their contact lists, and introduce a mechanism to remotely wipe all Meta content and accounts (including from WhatsApp and Threads) on a given device.
Some of the threats faced by LGBT people in the MENA region require thoughtful and creative solutions, particularly where law enforcement agents are actively using Meta’s platforms as a targeting tool. Meta should dedicate resources towards research and engagement with LGBT and digital rights groups in the MENA region, for example, by implementing the “Design from the Margins” (DFM) framework developed by Afsaneh Rigot, a digital rights researcher and advocate. Only with a sustained commitment to actively centering the experiences of those most impacted in all its design processes, can Meta truly reduce the risks and harms faced by LGBT people on its platforms.
Under the United Nations Guiding Principles on Business and Human Rights, social media companies, including Meta, have a responsibility to respect human rights – including the rights to nondiscrimination, privacy, and freedom of expression – on their platforms. They are required to avoid infringing on human rights, and to identify and address human rights impacts arising from their services, including by providing meaningful access to remedies and communicating how they are addressing these impacts.
When moderating content on its platforms, Meta’s responsibilities include taking steps to ensure its policies and practices are transparent, accountable, and applied in a consistent and nondiscriminatory manner. Meta is also responsible for mitigating the human rights violations perpetrated against LGBT people on its platforms while respecting the right to freedom of expression.
The Santa Clara Principles on Transparency and Accountability in Content Moderation provide useful guidance for companies to achieve their responsibilities. These include the need for integrating human rights and due process considerations at all levels of content moderation, comprehensible and precise rules regarding content-related decisions, and the need for cultural competence. The Santa Clara Principles also specifically require transparency regarding the use of automated tools in decisions that impact the availability of content and call for human oversight of automated decisions.
Human rights also protect against unauthorized access to personal data, and platforms should therefore also take steps to secure people’s accounts and data against unauthorized access and compromise.
The Secure Our Socials campaign recommendations are aimed at improving Meta’s ability to meet its human rights responsibilities. In developing and applying content moderation policies, Meta should also reflect and take into account the specific ways people experience discrimination and marginalization, including the experiences of LGBT people in the MENA region. These experiences should drive product design, including through the prioritization of safety features.
Regarding human rights due diligence, Human Rights Watch and its partners also recommend that Meta conduct periodic human rights impact assessments in particular countries or regional contexts, dedicating adequate time and resources to engaging rights holders.
Many forms of online harassment faced by LGBT people on Facebook and Instagram are prohibited by Meta’s Community Standards, which place limits on bullying and harassment, and indicate that the platform will “remove content that is meant to degrade or shame” private individuals including “claims about someone’s sexual activity,” and protect private individuals against claims about their sexual orientation and gender identity, including outing of LGBT people. Meta’s community standards also prohibit some forms of doxxing, such as posting people’s private phone numbers and home addresses, particularly when weaponized for malicious purposes.
Due to shortcomings in its content moderation practices, including over-enforcement in certain contexts and under-enforcement in others, Meta often struggles to apply these prohibitions in a manner that is transparent, accountable, and consistent. As a result, harmful content sometimes remains on Meta platforms even when it contributes to detrimental offline consequences for LGBT people and violates Meta’s policies. On the other hand, Meta disproportionately censors, removes, or restricts non-violative content, silencing political dissent or voices documenting and raising awareness about human rights abuses on Facebook and Instagram. For example, Human Rights Watch published a report in December 2023 documenting Meta’s censorship of pro-Palestine content on Instagram and Facebook.
Meta’s approach to content moderation on its platforms involves a combination of proactive and complaint-driven measures. Automation plays a central role in both sets of measures and is often relied upon heavily to justify under-investment in content moderators. The result is that content moderation outcomes frequently fail to align with Meta’s policies, often leaving the same groups of people both harassed and censored.
Procedurally, individuals and organizations can report content on Facebook and Instagram that they believe violates Community Standards or Guidelines, and request that the content be removed or restricted. Following Meta’s decision, the complainant, or the person whose content was removed, can usually request that Meta review the decision. If Meta upholds its decision a second time, the user can sometimes appeal the platform’s decision to Meta’s Oversight Board, but the Board only accepts a limited number of cases.
Meta relies on automation to detect and remove content deemed violative by the relevant platform, to catch recurring violative content regardless of complaints, and, where applicable, to process existing complaints and appeals.
Meta does not publish data on automation error rates or statistics on the degree to which automation plays a role in processing complaints and appeals. Meta’s lack of transparency hinders independent human rights and other researchers’ ability to hold its platforms accountable, allowing wrongful content takedowns as well as inefficient moderation processes for violative content, especially in non-English languages, to remain unchecked.
In its 2023 digital targeting report, Human Rights Watch interviewed LGBT people in the MENA region who reported complaining about online harassment and abusive content to Facebook and Instagram. In all these cases, the platforms did not remove the content, claiming it did not violate Community Standards or Guidelines. Such content, reviewed by Human Rights Watch, included outing, doxxing, and death threats, which resulted in severe offline consequences for LGBT people. Not only did automation fail to detect this content, it was also ineffective in removing the content once reported. As a result, LGBT people whose complaints were denied were left without access to an effective remedy, one that, delivered in time, could have limited offline harm.
Human Rights Watch has also documented, in another 2023 report, the disproportionate removal of non-violative content in support of Palestine on Instagram and Facebook, often restricted through automation processes before it appears on the platform, a process that has contributed to the censorship of peaceful expression at a critical time.
Meta also moderates content in compliance with government requests it receives for content removal on Facebook and Instagram. While some government requests flag content contrary to national laws, other requests for content removal lack a legal basis and rely instead on alleged violations of Meta’s policies. Informal government requests can exert significant pressure on companies, and can result in silencing political dissent.
Meta’s insufficient investment in human content moderators and its over-reliance on automation undermine its ability to address content on its platform. Content targeting LGBT people is not always removed in an expeditious manner even where it violates Meta’s policies, whereas content intended by LGBT people to be empowering can be improperly censored, compounding the serious restrictions LGBT people in the MENA region already face.
As the “Secure Our Socials” campaign details, effective content moderation requires an understanding of regional, linguistic, and subject matter context.
Human content moderators at Meta can also misunderstand important context when moderating content. For example, multiple Instagram moderators removed a post listing Arabic terms used as anti-LGBT “hate speech,” failing to recognize that the post used the terms in a self-referential and empowering way to raise awareness. Major contributing factors to these errors were Meta’s inadequate training and its failure to translate its English-language training manuals into Arabic dialects.
In 2021, LGBT activists in the MENA region developed the Arabic Queer Hate Speech Lexicon, which identifies and contextualizes hate speech terms through a collaborative project between activists in seventeen countries in the MENA region. The lexicon includes hate speech terms in multiple Arabic dialects, is in both Arabic and English, and is a living document that activists aim to update periodically. To better detect anti-LGBT hate speech in Arabic, as well as remedy adverse human rights impacts, Meta could benefit from the lexicon as a guide for its internal list of hate speech terms and should actively engage LGBT and digital rights activists in the MENA region to ensure that terms are contextualized.
Meta relies heavily on automation to proactively identify content that violates its policies and to assess content complaints from users. Automated content assessment tools frequently fail to grasp critical contextual factors necessary to comprehend content, significantly undermining Meta’s ability to assess content. For example, Meta’s automated systems rejected, without any human involvement, ten out of twelve complaints and two out of three appeals against a recent post calling for death by suicide of transgender people, even though Meta’s Bullying and Harassment policy prohibits “calls for self-injury or suicide of a specific person or groups of individuals.”
Automated systems also face unique challenges when attempting to moderate non-English content and have been shown to struggle with moderating content in Arabic dialects. One underlying problem is that the same Arabic word or phrase can mean something entirely different depending on the region, context, or dialect being used. But language models used to automate content moderation will often rely on more common or formal variants to “learn” Arabic, greatly undermining their ability to understand content in Arabic dialects. Meta recently committed to examining dialect-specific automation tools, but it continues to rely heavily on automation while these tools are being developed and has not committed to any criteria to ensure the adequacy of these new tools prior to their adoption.
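A toy illustration of that dialect problem, with invented words and sense labels (no real Arabic terms are used): a classifier keyed only to the formal variant assigns one meaning per surface form, so a string that is benign in Modern Standard Arabic but abusive in a regional dialect slips through.

```typescript
// Invented example: "<word>" stands in for an Arabic string whose
// meaning differs between Modern Standard Arabic (MSA) and a dialect.
type Sense = "benign" | "abusive";

const senses: Record<string, Record<string, Sense>> = {
  "<word>": { MSA: "benign", Egyptian: "abusive" },
};

// Dialect-blind lookup: always falls back to the formal (MSA) sense.
function naiveLabel(word: string): Sense | "unknown" {
  return senses[word]?.MSA ?? "unknown";
}

// Dialect-aware lookup: requires the dialect as an extra input signal,
// which is precisely the context automated systems often lack.
function dialectAwareLabel(word: string, dialect: string): Sense | "unknown" {
  return senses[word]?.[dialect] ?? "unknown";
}

console.log(naiveLabel("<word>"));                    // "benign" (wrong for Egyptian-dialect text)
console.log(dialectAwareLabel("<word>", "Egyptian")); // "abusive"
```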
Meta’s policies prohibit the use of its Facebook and Instagram platforms for surveillance, including for law enforcement and national security purposes. This prohibition covers fake accounts created by law enforcement to investigate users, and it applies to government officials in the MENA region who use fake accounts to entrap LGBT people. Accounts reported for entrapment can be deactivated or deleted, and Meta has initiated legal action against systemic misuse of its platforms, including for police surveillance purposes.
However, Meta’s prohibition on fake accounts has not been applied in a manner that pays adequate attention to the human rights impacts on people who face heightened marginalization in society. In fact, the prohibition has been turned against LGBT people: online harassment campaigns have falsely reported LGBT people’s Facebook accounts for using fake names (Instagram, unlike Facebook, does not prohibit pseudonyms). Facebook’s aggressive enforcement of its real-name policy has also historically led to the removal of accounts belonging to LGBT people who use pseudonyms to shield themselves from discrimination, harassment, or worse. Investigations into the authenticity of pseudonymous accounts can likewise disproportionately undermine the privacy of LGBT people.
The problems that Human Rights Watch and its partners hope to address in this campaign do not only occur on Meta’s platforms. Law enforcement agents and private individuals use fake accounts to entrap LGBT people on dating apps such as Grindr and WhosHere.
Before publishing its February report, Human Rights Watch sent a letter to Grindr, to which Grindr responded extensively in writing, acknowledging our concerns and addressing gaps. While we also sent a letter to Meta in February, we did not receive a written response.
Online harassment, doxxing, and outing are also prevalent on other social media platforms such as X (formerly known as Twitter). X’s approach to safety on its platform has come under criticism in recent years, as its safety and integrity teams faced significant staffing cuts on several occasions.
Meta remains the largest social media company in the world, and its platforms have substantial reach. Its offerings also span a range of services, from public posts to private messaging. Improving Meta’s practices would therefore have significant impact and serve as a useful point of departure for broader engagement with other platforms around the digital targeting of LGBT people in the MENA region.
The targeting of LGBT people online is enabled by their legal precarity offline. Many countries, including in the MENA region, outlaw same-sex relations or criminalize forms of gender expression. The criminalization of same-sex conduct or, where same-sex conduct is not criminalized, the application of vague “morality” and “debauchery” provisions against LGBT people emboldens digital targeting, quells LGBT expression online and offline, and serves as the basis for prosecutions of LGBT people.
In recent years, many MENA region governments, including Egypt, Jordan, and Tunisia, have introduced cybercrime laws that target dissent and undermine the rights to freedom of expression and privacy. Governments have used cybercrime laws to target and arrest LGBT people, and to block access to same-sex dating apps. In the absence of legislation protecting LGBT people from discrimination online and offline, both security forces and private individuals have been able to target them online with impunity.
Governments in the MENA region are also failing to hold private actors to account for their digital targeting of LGBT people. LGBT people often do not report crimes against them to the authorities, either because earlier complaints were dismissed or met with no action, or because they reasonably believed they would be blamed for the crime due to their non-conforming sexual orientation, gender identity, or expression. Human Rights Watch documented cases in which LGBT people who reported extortion to the authorities were themselves arrested.
Governments should respect and protect the rights of LGBT people instead of criminalizing their expression and targeting them online. The five governments covered in Human Rights Watch’s digital targeting report should introduce and implement legislation protecting against discrimination on the grounds of sexual orientation and gender identity, including online.
Security forces, in particular, should stop harassing and arresting LGBT people on the basis of their sexual orientation, gender identity, or expression and instead ensure protection from violence. They should also cease the improper and abusive gathering or fabrication of private digital information to support the prosecution of LGBT people. Finally, the governments should ensure that all perpetrators of digital targeting – and not the LGBT victims themselves – are held responsible for their crimes.
Spread the word about potential harms for LGBT users on social media platforms and the need for action.
The #SecureOurSocials campaign is calling on Meta and its platforms, Facebook and Instagram, to be more accountable and transparent about content moderation and user safety by publishing meaningful data on Meta’s investment in user safety, including content moderation, and by adopting additional safety features.
You can take action now: email Meta President of Global Affairs Nick Clegg and Vice President of Content Policy Monika Bickert to urge them to act on user safety.
While LGBTQ+ Americans and their advocates have made some progress in achieving greater civil rights, recognition, and representation for the LGBTQ+ community, recent actions suggest that further work is needed in the pursuit of equality.
A new report from Collage Group, a leading provider of cultural intelligence, takes a close look at LGBTQ+ marketing and advertising. Specifically, the study examines how LGBTQ+ Americans feel about recent ads aimed toward their community. It also provides the perspectives of non-LGBTQ+ people, as well as those of liberals and conservatives.
The report examines LGBTQ+ representation in advertising and finds that most Americans either support such ads or are neutral toward them. When LGBTQ+ individuals or groups appear in ads, 71% of the LGBTQ+ segment reacts positively, as do 31% of non-LGBTQ+ respondents; within the non-LGBTQ+ group, positive reactions are more common among younger Americans (37%, ages 18-43) than older ones (27%, ages 44-77).
Although LGBTQ+ consumers react positively to commercials that attempt to appeal to them, more than half remain skeptical of brands’ intentions: 55% of LGBTQ+ respondents say that brands’ efforts to woo the LGBTQ+ demographic come across as insincere, a figure that rises to 65% among LGBTQ+ members of Gen Z.
The report also examines backlash as a reaction toward companies that support the LGBTQ+ community, finding that awareness of such backlash is low among general consumers. Of those who are aware of backlash, baby boomers tend to be the most cognizant, followed by Gen Xers.
While the full study (available by request) discusses all of the above components in greater detail, it also delves into other factors that can help brands engage the LGBTQ+ demographic, including how best to appeal to broad audiences through halo effects, a list of group traits of the LGBTQ+ cohort, and how specific ads (Bud Light, Target, etc.) fared among LGBTQ+ viewers versus non-LGBTQ+ viewers.
GLAAD is urging Meta to take immediate action following the company’s independent Oversight Board’s decision to overturn Facebook’s original stance on a post that targeted transgender people with violent language. The case has highlighted the insidious methods used to attack LGBTQ+ individuals on social media, such as coded language and indirect messaging.
The ruling reversed Meta’s original decision to leave up a post that targeted transgender individuals with violent and harmful language. The post, from a user in Poland, displayed a curtain in the colors of the transgender Pride flag, alongside Polish text implying that transgender people should die by suicide, which the reviewing body found to be a clear violation of Meta’s Hate Speech and Suicide and Self-Injury Community Standards.
The board ruled that Meta’s handling of the post revealed significant shortcomings in its content moderation process. The company initially failed to recognize and remove the post: despite receiving multiple user reports, Meta’s automated and human review systems overlooked the post’s harmful implications and the coded language used to target the transgender community, the board found. Using coded references or satirical memes to slip past moderators is a tactic called “malign creativity.”
The case also revealed systemic failures in Meta’s moderation practices: the post was removed only after the Oversight Board selected it for review.
The case demonstrated a gap in Meta’s understanding and enforcement of its guidelines against hate speech and harmful content, allowing such damaging posts to remain on the platform, potentially contributing to a hostile online environment for LGBTQ+ individuals.
In response to the Oversight Board’s ruling, Sarah Kate Ellis, president and CEO of GLAAD, appealed directly to Meta CEO Mark Zuckerberg in a press release to address this issue publicly.
“I personally want to hear Meta CEO Mark Zuckerberg tell the world, today, that his company cares about the safety, rights, and dignity of transgender people,” Ellis said.
Ellis underlined the urgent need for Meta to confront and manage the spread of anti-trans hate on its platforms.
Jenni Olson, senior director of GLAAD’s Social Media Safety Program, emphasized the long-standing issues in Meta’s policy implementation while celebrating the board’s decision.
“This is a powerful ruling from the Oversight Board that calls upon Meta to address failures we have been articulating for many years in the annual GLAAD Social Media Safety Index,” Olson said.
The Oversight Board is an autonomous entity that reviews and renders binding decisions on content moderation cases across Meta’s platforms, including Facebook and Instagram. Established in 2020, it operates somewhat like a supreme court for social media and comprises specialists in various fields, including human rights, free speech, government, law, and ethics. The board has the power to reverse Meta’s content moderation decisions.
This case is one of many instances of Meta’s shortcomings in moderating harmful content. In September, The Advocate published a Media Matters for America report criticizing Instagram for failing to moderate content posted by the controversial anti-trans group Gays Against Groomers. Despite clear violations of Instagram’s community guidelines against hate speech, harassment, and misinformation, the group’s content remained accessible for over a year. This inconsistency in Instagram’s enforcement of content policies, particularly for content targeting marginalized communities, raised questions about Meta’s commitment to LGBTQ+ safety. The report highlighted the discrepancy between Instagram’s response and that of other platforms, like PayPal and Google, which had taken action against Gays Against Groomers.
If you are having thoughts of suicide or are concerned that someone you know may be, resources are available to help. The 988 Suicide & Crisis Lifeline at 988 is for people of all ages and identities. Trans Lifeline, designed for transgender or gender-nonconforming people, can be reached at (877) 565-8860. The lifeline also provides resources to help with other crises, such as domestic violence situations. The Trevor Project Lifeline, for LGBTQ+ youth (ages 24 and younger), can be reached at (866) 488-7386. Users can also access chat services at TheTrevorProject.org/Help or text START to 678678.
Erik Lundstrom was about 14 when he secretly purchased the coming-of-age novel “Rainbow Boys” and hid it in his room, waiting until he was alone to become absorbed in the story of three teenage boys coming to terms with their sexuality.
“I read that book five or six times that summer, just finally having some sort of outlet of stories about people that I identified with in a new and interesting, exciting and terrifying way,” Lundstrom told CNN.
It would be many more years until Lundstrom, now 32, would join the ranks of a small group of volunteers dedicated to creating a library packed to the brim with books written by or about LGBTQ people – metaphorically that is.
The Queer Liberation Library (QLL, pronounced “quill”) is entirely online. Since launching in October, more than 2,300 members have signed up to browse its free collection of hundreds of ebooks and audiobooks featuring LGBTQ stories, Lundstrom said.
After becoming increasingly alarmed at efforts to censor LGBTQ stories in the nation’s public schools, Kieran Hickey, the library’s founder and executive director, said they set out to create a haven for queer literature that can be accessed from anywhere in the country.
“Queer people have so many barriers to access queer literature – social, economic, and political,” Hickey said. “(For) anybody who’s on a journey of self-discovery in their sexual orientation or gender identity, finding information and going to queer spaces can be incredibly daunting. So, this is a resource that anybody in the United States can have no matter where they live.”
Until recent years, books featuring LGBTQ stories made up a small percentage of titles challenged in schools and public libraries in the US.
Between 2010 and 2019, only about 9% of unique titles challenged in libraries contained LGBTQ themes, according to data from the American Library Association, which tracks and opposes book censorship.
But books featuring the voices and experiences of LGBTQ people now make up an overwhelming proportion of books targeted for censorship – part of a broader, conservative-led movement that is limiting the rights and representation of LGBTQ Americans.
In 2021 and 2022, the ALA reported record-breaking numbers of attempted book bans, with more than 30% of challenged titles including LGBTQ themes. And in the first eight months of 2023, more than 47% of challenges targeted LGBTQ titles, preliminary data shows.
Pulling these stories from shelves, book ban opponents argue, deprives readers of all ages of essential, affirming representation of the LGBTQ community’s lives and history.
“Fundamentally, at its core, it is discriminatory against who we are as a people and a community, and it ‘others’ our families and our stories,” said Sarah Kate Ellis, the president and CEO of GLAAD, a nonprofit LGBTQ advocacy group.
A resource like QLL, she said, could be “a wonderful gift” for those searching for LGBTQ stories, including parents looking for children’s books, a person questioning their sexuality or a heterosexual person looking to understand their peers more deeply.
Naturally, QLL carries some of the most commonly challenged books, including Maia Kobabe’s “Gender Queer” and George M. Johnson’s “All Boys Aren’t Blue.” But its volunteer librarians have also curated lists of spine-tingling queer horror, Indigenous folktales, and time-bending fantasy, among others.
Its virtual shelves are also adorned with fixtures of the queer literary canon, such as Rita Mae Brown’s “Rubyfruit Jungle,” and groundbreaking new releases, like transgender actor Elliot Page’s memoir “Pageboy.”
Lundstrom, who directs the library’s steering committee, said the range of genres and identities represented in the collection reflects the vast diversity of the LGBTQ community.
“Whoever finds use from this platform for whatever reason – whether that is for vital information they need, or if it is to read a fun little romp about two men kissing – whatever that is, we want to be able to serve that,” Lundstrom said.
A library without walls
Though the QLL may not be able to provide the small joys of browsing cozy corridors of shelves and perusing time-worn book spines, its online platform offers something that can be invaluable to some readers: privacy.
Readers who are queer or hoping to explore their gender or sexuality may not be ready to explore a shelf of LGBTQ books in a public space or crack open such a book in front of family and friends, Hickey said. Being able to check out books from the privacy of one’s home or read on an unassuming tablet or laptop in public can relieve some of those feelings of anxiety and risk.
“This was a way to combat the book bans, but also to give people that sense of home and safety in their own space without having to potentially out themselves in any way,” Hickey said. “Privacy and hiding don’t have to be the same thing.”
The library can be accessed through OverDrive and the Libby app, which many public libraries use to house their digital collections. QLL’s informational website also features a “quick exit” bar on the screen, which redirects away from the site if a user needs to suddenly hide the website or leave the page.
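For readers curious how a “quick exit” control typically works, here is a minimal sketch of the general pattern. This is a generic illustration, not QLL’s actual code; the element ID and destination URL are placeholders.

```typescript
// Generic sketch of a "quick exit" button, not QLL's actual code.
// The element ID and destination URL are placeholders.
const NEUTRAL_URL = "https://www.example.com";

function quickExit(): void {
  // location.replace() swaps out the current history entry, so the
  // browser's back button will not return to the sensitive page.
  window.location.replace(NEUTRAL_URL);
}

document.getElementById("quick-exit")?.addEventListener("click", quickExit);
```

The key choice is replacing the history entry rather than navigating normally, so no trace of the page remains one click away.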
Funded by donations and registered as a nonprofit, QLL is also not beholden to government funding or to the school and library boards that typically enforce book bans. Selena Van Horn, an education professor at California State University, Fresno, whose research focuses on elementary school literacy and LGBTQ literature in K-12 schools, noted that some libraries may lack robust LGBTQ collections simply because they have tight budgets or serve remote areas with limited resources.
“Not everyone has access (to LGBTQ books) purely based on their location,” Van Horn said. “So being able to access things online is a wonderful thing.”
Advocates warn bans reinforce stigma
Efforts to remove books because they contain LGBTQ characters or themes perpetuate the “othering” and exclusion that many LGBTQ people already struggle to overcome, said Ellis, the president of GLAAD.
“Children and adults are hearing that there is something wrong with them” when people propose banning their stories, Ellis said. “So that immediately … builds this culture that LGBTQ people, queer people don’t belong and should be on the margins of society.”
For children and young adults, seeing stories that represent their identity can be an essential part of developing positive self-worth, said Van Horn.
“When children don’t have access to stories that represent their identities in a positive light – that show that people like them go on to do wonderful things – they can internalize those feelings and wonder, ‘Am I what they are saying I am in some negative way?’” Van Horn said.
This is especially important for LGBTQ children or youth whose families or school environments are hostile to their identities, she said.
“It’s vitally important that they have access to literature and potentially communities – even in an online space – that can support them and know that they’re beautiful, wonderful people and their identities will be valued, even if not in the space they are right now,” Van Horn said.
Already, the library has received messages from readers thrilled to see themselves reflected in the book collection, including one that was particularly meaningful to Hickey.
“There was one person who let us know that we’re their main access to library materials at this point because of where they live in a rural area,” Hickey said.
“That was a lot for me because I was just like, ‘This is the exact person I’m trying to reach. This is the exact person I want to help.’”