Equity by Design?

The promises—and potential perils—of algorithmic healthcare

In honor of Pride, all of the Well Beings News reporting (this month only) will be available to both paid and free subscribers. My apologies for this one publishing a day late—our newsletter host, beehiiv, had some technical difficulties this week. I promise it was worth the wait!

As artificial intelligence tools flood the healthcare landscape, promises of streamlined care and democratized access abound. But for LGBTQ+ patients, the stakes are uniquely high. This feature dives into the tension between innovation and inclusion, exploring how AI systems inherit the biases of their training data, pose privacy risks, and exacerbate existing inequities. With insights from healthcare professionals, technologists, and advocates, we ask: who is AI really serving—and at what cost?

Imagine this: you’re a queer trans person filling out yet another intake form, this time at a clinic that touts its cutting-edge AI system. As you toggle between dropdown menus that don’t quite fit, you wonder what the algorithm will do with the incomplete version of your identity you’re forced to submit. You know that AI models often underperform for people whose identities fall outside the majority datasets they were trained on. You worry the system won’t just overlook you, but that it could make life-changing decisions about your healthcare needs, based on a profile of someone you’re not.

A recent JAMA Network Open study found that people who struggle with their health have significantly less optimistic views of the competence of healthcare AI software. And in 2023, a Sanofi poll found that LGBTQ+ people had less trust in the healthcare system overall. This is unsurprising, given what we know about the prevalence of discrimination and bias experienced by both LGBTQ+ and disabled or chronically ill people within healthcare systems.

WHY UPGRADE

Paid subscribers get…

  • access to the research section of this Monday Roundup newsletter

  • original reporting every Wednesday

  • interviews with scientists, researchers, health and wellness professionals, and LGBTQ+ change-makers every Friday

  • all downloadables in the Resource Library

Upgrade now for full access!

New AI healthcare tools are being developed every day, and the pressure is on for people working in health and wellness industries to adopt the cool new tech promising to democratize care and solve myriad problems. “But do you really need this new tool?” asks Dr. Anmol Agarwal, the AI security and privacy expert who founded Alora Tech. “Is it actually going to help you? Or are you just trying to stay up to date with the latest trends? Sometimes AI is actually a lot more expensive and more difficult to implement than a traditional method,” she cautions. “So do a pros and cons analysis.”

Separating Hype from Help

AI is everywhere, with new tools flooding the market, promising to streamline intake, personalize care, and catch diagnostic signals that even seasoned clinicians might miss. Many so-called AI tools, some of them themselves coded by AI, are actually rules-based systems, glorified data pipelines, or simple automation scripts with no machine learning involved. The hype often outpaces the reality of what a tool can deliver, leaving providers and wellness professionals chasing software that may not do what it says on the label, that may raise privacy concerns serious enough to alienate patients, and that they may not need in the first place.

“With AI, it’s like every day there’s a new buzzword or acronym. People get carried away by those buzzwords rather than understanding what the technology actually does,” says Agarwal. “Everyone's talking about AI, and they promise things like it will make you much more efficient, it will make you faster, more productive.” But even the most seasoned professional can fall into magical thinking where new technology is concerned.

“There are a lot of companies using AI, or who say they’re using AI, but they have no idea what the AI is for. They’re just throwing the name in there to attract capital, or to attract people who want to be part of something new and cool and exciting,” says David Weiss, Chief Strategic Officer & General Counsel at Cognivia, a company pioneering the use of machine learning in healthcare and research. “You need to understand: what is your AI for?” The issues here are multifaceted, especially if you are using third-party systems that you don’t fully understand.

First, any AI tool, no matter how well designed, will be limited by its training data, which introduces the possibility of overlooking or underserving people who belong to already-marginalized populations and who may not be well represented in that data. In healthcare settings especially, where it is important to foster and maintain confidentiality and trust between professionals and patients, there may also be questions about the privacy of data being entered into AI systems. And finally, many of these new systems are costly. Before spending the financial and labor resources necessary to integrate a new tool into patient care, AI or not, it’s worth asking: does it solve a problem I actually have?

Missing from the Model

It’s with this question in mind that Weiss approaches developing machine learning, using a “science first” model that defines the problem that needs addressing before seeking out AI solutions. One thing that AI can be particularly good at, given adequate information, is analysis: taking large collections of data and finding patterns that can help with the allocation of limited resources.

At Cognivia, Weiss and his team have focused a great deal of their efforts on improving the outcomes of clinical trials by reducing patient dropout rates and improving adherence to trial protocols. Their AI systems use patient questionnaires to identify people likely to need more support in these areas, so researchers can better allocate their time and effort toward participants who are struggling before they stop participating.
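
To make the general shape of that approach concrete, here is a minimal, purely illustrative sketch of questionnaire-based risk scoring. It is not Cognivia’s actual system: the file names, questionnaire fields, and model choice are all hypothetical. The idea is simply to train a basic classifier on past participants’ answers and then flag the current participants who look most likely to drop out, so staff can offer extra support early.

    # Illustrative only: a toy dropout-risk scorer, not Cognivia's method.
    # Assumes two hypothetical CSV files and made-up questionnaire columns.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Historical data: one row per past participant, with a known outcome.
    history = pd.read_csv("past_participants.csv")
    features = ["anxiety_score", "travel_burden", "symptom_burden", "prior_adherence"]

    X_train, X_test, y_train, y_test = train_test_split(
        history[features], history["dropped_out"], test_size=0.2, random_state=0
    )

    # Fit a simple, interpretable classifier on the questionnaire answers.
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)
    print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

    # Score current participants and flag the ten highest-risk people
    # for extra check-ins from the research team.
    current = pd.read_csv("current_participants.csv")
    current["dropout_risk"] = model.predict_proba(current[features])[:, 1]
    flagged = current.sort_values("dropout_risk", ascending=False).head(10)
    print(flagged[["participant_id", "dropout_risk"]])

In practice, a system like this would also need validation, monitoring for bias across demographic groups, and careful handling of the questionnaire data itself.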

One of Cognivia’s hopes is to improve diversity among people involved in clinical trials. “There's still a lot of work to be done,” Weiss tells me, where inclusion is concerned, especially for sexual and gender minorities like queer and trans folks. It’s a well-known problem that clinical trials often exclude people who may experience something that researchers worry will confound the data. Historically, this went so far as to exclude all women from clinical research, out of concern that a menstrual cycle would impact results, or that the treatment could harm a potential pregnancy. It also commonly means excluding anyone with a “managed disease” like HIV, or anyone taking hormone replacement therapy (HRT).

Because previous clinical research has been so limited, this often means that the data on which clinical AI has been trained are also limited, which poses a bigger problem for minority populations than others. “AI is only as good as the data it’s been trained off,” says Oscar Buckley of Blumefield, a digital marketing and software development firm. “Training data often lacks diverse representation, leading to misdiagnosis or harmful assumptions.” 

This is often a compounded issue for queer and trans folks, many of whom feel as if they can’t reveal everything about themselves to health professionals, and who have to silo their medical needs among various providers in order to protect their access to care. This is a particular concern with the use of Large Language Models (LLMs) like ChatGPT in medical care. These systems are notoriously bad at following directions, which could easily lead to unintentional privacy breaches, where the machine shares more details about a patient than is medically necessary or appropriate. Further, LLMs have a bad habit of “hallucinating” information, which means using one to generate appointment notes could easily result in the machine attempting to “fill in the gaps” with its own assumptions.

Transparency is Trust

The worries that LGBTQ+ people have about the privacy of their medical data will only be exacerbated by AI, especially when the terms of use aren’t clear. First and foremost, if you intend to implement an AI tool in your practice, you need to be clear on how it works: where the data is held, how it’s anonymized, and how it’s used.

“AI is only as secure as the database it works off of, and databases nowadays are not unhackable,” Buckley warns. There are various ways that data used in machine learning can be made more secure, such as federated learning and differential privacy. “[Federated learning] allows AI models to be trained on patient data locally, without transferring sensitive data to central servers,” Buckley explains, while differential privacy adds carefully calibrated statistical noise to a dataset or its outputs, so that results can’t be traced back to specific patients.
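
For readers curious what differential privacy looks like in practice, here is a minimal sketch, assuming the simplest possible case: a counting query over a patient database. Rather than releasing the exact count, the system adds calibrated random noise so that no single patient’s presence or absence can be inferred from the answer. The dataset, query, and privacy budget (epsilon) below are illustrative, not recommendations.

    # Minimal differential-privacy sketch: release a noisy count instead of
    # an exact one. Epsilon and the example data are illustrative only.
    import numpy as np

    rng = np.random.default_rng()

    def private_count(records, predicate, epsilon=0.5):
        # A count changes by at most 1 when one person is added or removed,
        # so Laplace noise with scale 1/epsilon satisfies epsilon-DP.
        true_count = sum(1 for r in records if predicate(r))
        noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    # Hypothetical panel: how many patients take a given medication?
    patients = [{"id": i, "on_med": i % 7 == 0} for i in range(1000)]
    print(round(private_count(patients, lambda p: p["on_med"])))

Smaller values of epsilon mean more noise and stronger privacy guarantees, at the cost of less precise results.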

In a political climate where trans people in particular fear targeting from government officials attempting to identify them via public data, being able to explain to your patients how their data will be protected, and how you will prevent it from being used to target them, is absolutely vital for maintaining patient trust. Transparency is the best way to combat those fears. “If you have a [Terms & Conditions notification] that’s three pages long, no one’s going to read that,” Buckley tells me. And if those terms and conditions are inaccessible, people are likely to be suspicious. “A good way to combat that and create trust is to have a short form, something that someone can see within 30 seconds, maybe a video which sums up the Ts & Cs really well.”

This means you have to understand the details yourself, especially to make sure that you are complying with any state or federal regulations relevant to your work. “In a perfect world, I would wish for all patients to be clearly informed of how their data will be shared. All patients who enter a healthcare setting should be considered vulnerable and thus worthy of all the information they need to protect their own privacy,” says Kian Xie, a healthcare analytics manager who has guided data and analytics work across multiple healthcare domains, including inpatient and outpatient care, research, pharmacy, Health Information Exchange, and life sciences.

“Patients should be informed clearly and explicitly of how their data will be altered, mapped, and standardized. The patient should ideally have the ability to confirm or correct fields like gender identity, though legal sex may be constrained by insurance or ID requirements: a simple, direct conversation where the registrar or provider informs the patient of what they’re legally obligated to report, what the patient has the right to determine or approve, and what potential issues there may be based on the gender the patient is listed as on their health insurance.”

Implementing these processes may change how providers feel about the efficiency of new AI tools, and may restrict how those tools can be deployed, affecting their efficacy. “If the resources don't exist to build these conversations into the workflow,” Xie advises, “we must err on the side of omission, wherever not legally mandated, to prioritize patient safety. Under no conditions should an assumption be documented about a patient's sexual orientation or transgender status.”

Sharp Tools, Blunt Impact

“In USA healthcare, progress is always a battle,” says Xie. “On one side, you have limited resources, employees' needs for job security, and the needs for safety and regulatory compliance. All of those things hold up forward movement. On the other side, you have executives who want to cut costs and see AI as a potential way to make the healthcare business financially feasible. And you have the people in the tech business who are passionate about getting new innovations to the market, but are naive about what it actually takes for them to be accepted.”

This makes proactive change slow, but as Xie points out, this doesn’t mean change is impossible. “The pandemic showed us that in times of crisis, resistance can be broken. The face-to-face interaction between provider and patient used to be considered ‘sacrosanct’, where the idea of it getting replaced by telehealth was unthinkable. Covid challenged that, and telehealth has now become widespread.”

As healthcare systems face increasing financial strain, big changes are likely coming for those working on the front lines. And given the political climate in the US, in which any kind of regulation of AI is being strongly discouraged, that change could come quite quickly. “Leaders are always looking for ways to cut costs, and the biggest cost is people,” says Xie. “We've seen evidence that many functions of healthcare professionals, including MDs, could feasibly be replaced by AI. As more regulatory approvals are granted for AI to play a role in clinical care, we will be in for some big decisions and sudden changes. For those who operate in ‘firefighting’ mode on a daily basis, it will seem like these changes are coming out of nowhere. But the pressure has been on for a long time.”

This is likely to lead to problems we can’t even quite fully conceive of yet. “There is a lot of nuance in healthcare,” Xie says. “AI can help suggest questions and queries for data discovery, and help cover some bases in the requirements phase, but it still can't fully take the place of a human analyst, especially one who is experienced enough with healthcare to look out for the nuances and inconsistencies.” 

Unfortunately, this isn’t always reflected in public perception. A recent study published in the New England Journal of Medicine found that participants regularly rated AI-generated medical advice that physicians had judged to be high quality as more valid, reliable, and trustworthy than advice that came directly from doctors. In fact, doctors’ responses were rated as no more trustworthy, and sometimes less so, than AI advice that was misleading or incorrect.

AI isn’t just coming to healthcare—it’s already here, and the systems built today will shape the future of care for generations. But that future is not set in stone. It will be written by the people who decide what gets measured, who gets represented, and which risks are deemed acceptable. If used with intention, transparency, and respect for lived experience, AI could help fill gaps in care, reduce burdens on overworked providers, and surface insights that improve lives. But that will require that healthcare professionals understand the tools they’re using, demand better data, and refuse to compromise on patient trust. For queer and trans patients already navigating a system that often marginalizes or misunderstands them, these decisions aren’t just technical—they’re deeply personal. Providers must push beyond convenience and hype to ask hard questions about equity, privacy, and accountability, before handing the reins to a machine.
