NEW YORK CITY — Lady Dariana, a transgender woman from Ecuador, fled life-threatening situations tied to her activism in her home country and came to the United States in 2022, seeking refuge and safety. 

“I wanted a quieter life — after receiving death threats from conservative groups associated with mafias in my country — where I could be myself without fear,” said Lady Dariana, 33, who asked that her last name not be shared for her protection.

However, her struggle against discrimination continued throughout her journey, as she — like the 33 other transgender and nonbinary individuals from Mexico, Ecuador, Honduras, Nicaragua, El Salvador, Panama, Venezuela, Colombia, Guatemala, Peru, Cuba, and elsewhere interviewed for this story — found herself facing increasingly hostile political climates.

That hostility is newly exacerbated by a growing reliance on artificial-intelligence-fueled systems — which have already been shown to have a disparate impact on minority groups — at security checkpoints and in identification protocols that operate on a binary definition of gender, leaving little room for those who don’t fit the criteria, experts say.

“Algorithms are not neutral,” said Liaam Winslet, an Ecuadorian-born activist with the Queens-based advocacy group Transgrediendo, which promotes health care and human rights for transgender, nonbinary, gender expansive and intersex communities in New York. “They are created by people with biases, and those biases translate into code that then affects entire communities, especially the trans community, the queer community in general.”

Unsafe Passage

Lady Dariana’s odyssey began when she was detained at a border control facility in Reynosa, Mexico, where she was placed in a cell for men, despite her Ecuadorian ID affirming her gender as a woman, she said. Ecuadorian law has allowed gender markers to be changed to reflect gender identity since 2016.

Eventually, she was able to leave the facility, and soon arrived in the U.S. She said U.S. immigration officers also took issue with her ID, but allowed her passage into the country.

From the border, Lady Dariana said she made her way to the airport in McAllen, Texas, to fly to New York City. After passing through the TSA checkpoint’s full-body scanner, agents asked her to step aside for additional screening, she said. 

While she could not fully understand what the agents were saying, since they spoke English, she suspected they were singling her out for being transgender because agents kept pointing at the monitor where her body scan was displayed, gesturing towards the part of the image near her genital area. 

“They also asked if ‘I was carrying drugs or something,’” Lady Dariana said. “I wasn’t carrying anything, I had just left the border detention center.”   

SEEKING SAFETY: Lady Dariana came to the U.S. to flee threats against her in Ecuador. (PHOTO/Courtesy Lady Dariana)

Lady Dariana said she was pulled to the side for a pat-down by TSA agents. She said she asked to be searched by a female officer, as is allowed under TSA policy. Despite her request, two male officers were assigned to search her instead, touching her crotch and groping her chest during the search, she said. 

Such experiences are reportedly all too common for transgender and nonbinary travelers, who filed 5 percent of all complaints about mistreatment by TSA agents between 2016 and 2019 — despite making up an estimated 1 percent of the population, according to a ProPublica analysis.

The TSA announced that as of June 2023, it was moving to a new artificial intelligence-driven, “gender neutral” screening system touted as making travel easier for trans and nonbinary travelers. The change was heralded by advocates, including the American Civil Liberties Union, which had long raised alarms about problems with body scanning technology.

However, the changes came alongside another controversial introduction of AI technology at airports: Customs and Border Protection rolled out biometric facial matching technology to scan passengers at more than 200 airports in the U.S., among them every airport with international departures.

The ACLU has sued Customs and Border Protection as well as other government agencies to obtain details on their use of facial recognition technology and surveillance, saying the measures pose a threat to individual safety and privacy, particularly for marginalized communities.

“These agencies have abused or even continue to abuse surveillance authorities to spy on protesters, political opponents, Black and Brown communities, and more. Adding face recognition to their arsenal raises serious cause for alarm,” the ACLU wrote.

Coding the Binary

Increasingly, private companies and government agencies are turning to AI to carry out basic daily functions. There were more than 700 use cases of the technology in federal government initiatives as of November 2023, according to AI.gov, a site established by the White House Office of Science and Technology Policy.

Examples include Customs and Border Protection using AI software to “recognize objects, people, and events in any image or video stream.” Once a detector is trained, according to the Department of Homeland Security’s AI Use Case Inventory, it can monitor streaming video in real time to pinpoint “objects, people, and events of interest.”
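To give a rough sense of the mechanics being described, here is a minimal sketch of a trained detector scanning a video stream. It is not the CBP or DHS software; it uses a generic, pretrained face model that ships with the open-source OpenCV library, and the camera source and parameters are stand-ins chosen purely for illustration.

```python
# Generic sketch of a pretrained detector monitoring a video stream.
# This is NOT the CBP/DHS system; OpenCV's bundled Haar-cascade face model
# is used only to show the general loop: grab a frame, run the detector,
# flag whatever it scores as a match.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)  # 0 = default webcam; any video source would do
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # The pretrained model sweeps the frame and returns bounding boxes
    # around regions it classifies as faces.
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to stop
        break

cap.release()
cv2.destroyAllWindows()
```

A production system swaps in far larger models, many cameras and downstream databases, but the basic pattern of continuously scanning frames and flagging “objects, people, and events of interest” is the same.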

In private businesses, smartphone makers including Apple and Samsung allow users to unlock their phone using AI facial recognition technology, while banks such as Bank of America use facial recognition to let customers log into their mobile apps. Amazon, through Amazon Go, has a cashier-free shopping service using facial recognition. 

The White House’s AI.gov promises that “The federal government is also establishing strong guardrails to ensure its use of AI keeps people safe and doesn’t violate their rights.”

‘Reinforcing Stereotypes’

Research shows that AI facial recognition technology identifies cisgender individuals — those who identify with the gender they were assigned at birth — with high accuracy: a 98.3% success rate for cisgender women and 97.6% for cisgender men. But the failure rate was 38% when identifying transgender individuals, researchers at the University of Colorado Boulder found in a 2019 study. In addition, systems built on AI have exceptionally low rates of recognition for nonbinary people and other gender categories, the researchers found.

“These systems run the risk of reinforcing stereotypes of what you should look like if you want to be recognized as a man or a woman. And that impacts everyone,” said Morgan Klaus Scheuerman, one of the CU Boulder researchers.

Scheuerman noted he was misidentified as female by the algorithm, apparently because he has long hair. He and his fellow researchers, Jacob Paul and Jed Brubaker, say technology has not evolved at the same pace as modern understandings of gender. Adapting algorithms to handle a broader spectrum of identities would require a more complex and gender-diverse coding approach, they said. 

“I am not afraid of them, I have fought and will continue to fight. I’ve been a survivor of the system since I was born.” — Josemith Gómez

Training an AI model consists of feeding a considerable amount of data into a computer, which finds patterns in that data and builds a model from them. But trans and nonbinary people often fall outside of these algorithmic models, which encode a binary understanding of gender.

When artificial intelligence algorithms are trained on data sets that encode gender as a binary, their ability to recognize diverse gender identities, such as transgender or nonbinary people, is compromised, experts say.
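A stripped-down example makes that constraint concrete. The sketch below is purely illustrative; the data, labels and model are invented and do not represent any system described in this story. But it shows why a classifier trained only on “male” and “female” labels has no way to place anyone outside those two categories: every input is forced into one of them.

```python
# Illustrative toy example only; the data and model are invented for this
# sketch and do not represent any real gender-recognition system.
import numpy as np
from sklearn.linear_model import LogisticRegression

# A training set whose labels contain only two values, because that is all
# the (hypothetical) dataset was collected with.
X_train = np.array([[0.2, 0.9], [0.3, 0.8], [0.8, 0.1], [0.9, 0.2]])  # made-up face features
y_train = np.array(["female", "female", "male", "male"])

model = LogisticRegression().fit(X_train, y_train)

# A new face that resembles neither training cluster is still forced into
# one of the two learned labels; there is no third option.
new_face = np.array([[0.5, 0.5]])
print(model.predict(new_face))  # always "female" or "male"
print(model.classes_)           # ['female' 'male'] -- the only outputs possible
```

Real systems are vastly more sophisticated, but the underlying limitation the researchers describe is the same: a model can only return categories that appeared in its training labels.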

“When we are thinking about how A.I. discriminates, we have to think about how machine learning systems are built,” said Meredith Broussard, a data journalist and professor at New York University who focuses on technology bias. 

“The mathematical patterns in the data are the same mathematical patterns in the world. Which is to say there are patterns of discrimination,” added Broussard, author of the books “More Than a Glitch” and “Artificial Unintelligence.” “So any kind of discrimination that exists in the world or has existed historically is likely going to be reflected in the data that’s used to train the system up.”

Growing Hostility

There were an estimated 1.6 million transgender youth and adults in the U.S. in 2022 — a small fraction of the country’s roughly 336 million people, according to the Williams Institute.

As of late February 2024, close to 500 bills targeting transgender children and adults — including efforts to ban medical care, to bar changes to legal documents to make them consistent with one’s gender and to ban trans people from bathrooms consistent with their gender — had been introduced in 41 states, according to the Trans Legislation Tracker, an independent compilation of state-level legislation.

The bills follow last year’s uptick in anti-trans legislation, which saw three times as many measures introduced as the previous record, the group found.

At least 32 transgender people were slain in the U.S. in 2023, compared with 41 violent deaths recorded in 2022, 59 in 2021, and 45 in 2020, the Human Rights Campaign estimated. Most victims were trans people of color, the HRC found. The group says the figures are likely an undercount because many murdered trans people are not properly identified in media or police reports. 

According to an FBI report, 469 hate crimes against trans people were committed in the U.S. in the last year.

Compounding Existing Biases  

Trans people and activists fear that AI will worsen the hostility they already encounter from members of law enforcement. 

“For the police, we are like trash,” said Kenia Orellana, a transgender woman from Honduras. Orellana said she suffered discrimination and violence from law enforcement officers not only in Honduras, but also in the U.S. “Just because of the way I look, they stop me on the street. They ask me what I’m doing at night, or if I’m carrying drugs.”

Between 2013 and 2022, 15 transgender people died in the U.S. at the hands of law enforcement officials while in jail or prison, or while being held in Immigration and Customs Enforcement detention facilities, according to Human Rights Watch.

“With or without artificial intelligence, I don’t want to be watched, to be tracked, to feel like they are watching me, but it’s something they have always done,” said Josemith Gómez, a trans woman. (PHOTO/Courtesy Josemith Gómez)

Josemith Gómez, 22, was born in Venezuela, and said her family threw her out as a teenager for being transgender. Of the 34 trans women interviewed for this story, 31 said they had been rejected by their families. 

Gómez came to the U.S. in 2022, and has felt safer since arriving in New York, where she now lives in Queens. Still, discrimination persists, she said. Less than a year ago, she said, she was in an abusive relationship with a man in the Bronx who threatened to call Immigration and Customs Enforcement and have her deported.

Instead, she called the police, but said officers did nothing to intervene when the man threw her belongings out of a sixth-floor window.

“The police just told me to pick up my things and leave,” she said. 

To this day, Gómez wonders if the police did nothing because “I’m a woman, because I’m a transgender woman, because I’m an immigrant or because I’m Hispanic. I don’t know, and maybe I never will.”

Facial Recognition Dangers

Many of the 34 trans individuals interviewed for this story said they’ve had negative encounters with facial recognition software.

Erika Lopez, 33, a transgender woman who came to New York from Mexico, said she has been shut out of her banking app more than once due to AI facial recognition software malfunctions.

“After making my gender transition, my appearance has changed significantly, but the information [stored in the system] is still based on how I looked before,” Lopez told the NYCity News Service. “The app sometimes fails to recognize me correctly and prevents me from accessing my account. This happens all the time… it is very frustrating, especially when I am trying to perform important transactions at the bank, especially if someone is watching me.”

Kenia Orellana said she has also been shut out of her Apple iPhone after she elected to use biometric identification as a way to unlock it.

“My phone sometimes fails to recognize me correctly, especially if my appearance changes due to makeup or hairstyle,” Orellana said. “I depend on my phone for many daily activities, and I worry about being blocked. It has happened to me several times.”

Some in the trans community are concerned that such problems with apps and phones are harbingers of even more serious forms of bias. AI has already been shown to have a disparate impact on minority groups — manifesting in everything from racial discrimination in resume screening to disparities in insurance rates to the use of facial recognition technology for surveillance.

“Algorithms are not neutral.” — Liaam Winslet, Transgrediendo

“I think the growth of AI discrimination demonstrates that we’re not doing enough because we continue to see more products come to market that harm the public through the same biased, invasive, error-prone algorithms to make life-altering decisions,” said Albert Fox Cahn, Executive Director and Founder of the Surveillance Technology Oversight Project (STOP). “And I think we know that the risk is most acute for those who have often been most systematically marginalized by large institutions and governments, including trans people.”

For example, activists have expressed concerns about police joining the annual NYC Pride march, going so far as to ban the NYPD and other law enforcement from participating in uniform through 2025. Activists say that AI could give officials a dangerous tool to identify those who attend LGBTQ events and criminalize them.

“I think facial recognition is really concerning because it means that with one photo, you can potentially identify thousands of people,” Fox Cahn said.

Multiple studies, including by independent tech advocates such as Timnit Gebru’s Distributed AI Research Institute, and government experts like the National Institute of Standards and Technology (NIST), have investigated examples of bias in AI technologies.

“Bias is neither new nor unique to AI and it is not possible to achieve zero risk of bias in an AI system,” NIST researchers wrote in a 2022 publication, “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence.”

‘The Scale of Damage’

The NIST report warned that the scope of bias potentially perpetuated by AI is more of a threat than individual human bias, because it can happen without transparency or accountability — and at a massive scale.

“Harmful impacts stemming from AI are not just at the individual or enterprise level, but are able to ripple into the broader society. The scale of damage, and the speed at which it can be perpetrated by AI applications or through the extension of large machine learning models across domains and industries requires concerted effort,” the report’s authors wrote.

After the U.S. Capitol attack on Jan. 6, 2021, law enforcement agencies turned to Clearview AI, a private company whose controversial AI-driven facial recognition software harvests millions of photos from social media, police mugshots and other sources. 

More than 600 law enforcement agencies use the company’s app — including the FBI, Immigration and Customs Enforcement and the Department of Homeland Security. Searches employing the app spiked 26% in the days following the Capitol attacks, according to the company’s CEO, Hoan Ton-That.

Ton-That told NYCity News Service in a statement that the company aims to eliminate any bias in its algorithm. Ton-That added that Clearview AI’s algorithm achieved 99.85% accuracy in matching the correct face from a collection of 12 million photos, across all demographics.

Ton-That did not address how the software performs with regard to identifying transgender and nonbinary individuals, or what actions the company is taking to reduce bias in gender identification.

Standards for the Future

Advocates say the secrecy with which officials are integrating AI into daily life should be a concern for everyone.

Winslet, the Transgrediendo activist, is concerned about intensifying surveillance of minority groups, especially the African-American, Latinx and transgender communities.

“We are seeing a clear discriminatory approach by law enforcement, especially with the implementation of robots equipped with cameras in specific areas of New York,” Winslet said, referring to a robot temporarily assigned to patrol the Times Square subway station and a set of “digidog” robots that have been used by the NYPD since 2020.  

Bamby Salcedo, a nationally recognized trans leader who has visited the White House, said it’s paramount that trans people and other minorities are actively engaged in the development of AI. That could include working groups, hiring trans and minority developers and testing new software with an eye on eliminating bias. 

WHITE HOUSE: Trans activist Bamby Salcedo (R) poses with President Joe Biden during a visit to the White House. (PHOTO/Courtesy White House)

Salcedo and others interviewed also believe that protecting trans rights in the context of artificial intelligence and surveillance will require government transparency and clear regulations.

Josemith Gómez said no matter what happens, the future will likely require the same resistance and endurance that trans people have used to make it this far.

“With or without artificial intelligence, I don’t want to be watched, to be tracked, to feel like they are watching me, but it’s something they have always done,” Gómez says. 

Still, she reflects, “I am not afraid of them, I have fought and will continue to fight. I’ve been a survivor of the system since I was born.”