On Saturday, April 4, a new study landed in Acta Paediatrica. By Sunday morning, the "Society for Evidence-Based Gender Medicine" (SEGM) — a known hub of the "anti-LGBTQ+ pseudoscience network" according to the Southern Poverty Law Center — had posted about it on X. By Monday, it had 446,000 views. Conservative news outlets like The Federalist published on the study entirely without criticism, and it quickly made its way into the hands of centrist and anti-trans liberal journalists like Matt Yglesias and Benjamin Ryan.

It hasn't been covered by the New York Times or the Washington Post... yet. But it will be. And when it is, they will undoubtedly use their reputation among liberals to launder this latest installment of the anti-trans agenda into something more respectable. SEGM's own 2024 year-in-review boasts that New York Times columnist Pamela Paul described them as "one of the most reliable nonpartisan organizations dedicated to the field" of anti-trans lobbying. We know what's happening.

"Every time one of these studies drops, it gets laundered through policy bodies and court filings as settled science. It isn't," writes Alejandra Caraballo, a clinical law professor who provided a thorough analysis of the study on Bluesky, which forms the basis for much of what follows. (Be sure to subscribe to her journalism at The Dissident.)

In this installment of Rx Resist, I'll walk you through this pipeline so we can start learning how to analyze both science journalism and its underlying studies, probe them for potential political bias, and trace how that bias finds its way into mainstream media — how a politically motivated paper moves from a right-wing network through centrist amplifiers toward mainstream coverage.

Normally, this is where I would drop a paywall break! Today, you are getting this story for free, so be sure to subscribe and upgrade to paid to access the rest of this skill-building series and the media literacy course at The Well, launching in June!

First, let's break down everything wrong with this study; then we'll look at how it has progressed from funding through publication to the feeds of right-wing, centrist, and liberal media members.

The study misrepresents its own results

Caraballo's analysis identifies three distinct ways the data was misrepresented.

1) They're not measuring what they claim. The headline metric — contact with specialist psychiatric services — is a proxy, not a direct measure of morbidity. Kids in gender clinics are under intensive clinical surveillance. Controls are not. Caraballo: "You are measuring detection, not disease. This is epidemiology 101."

2) The comparison groups are rigged by design. In Finland, severe psychiatric morbidity disqualifies patients from gender-affirming care. The treatment group was pre-selected for low baseline psychiatric contact. Any apparent increase afterward is at least partly an artifact of that selection and the subsequent monitoring. The dramatic numbers hide this comparison entirely.

3) The authors buried their own exculpatory finding. Once you control for prior psychiatric treatment, care status has no independent effect on outcomes. That finding is in the paper, in Tables 4 and 5. It is not in the authors' conclusions. Caraballo: "The authors intentionally chose the opposite frame to mislead people."

What the data actually supports. Trans youth carry an elevated psychiatric burden regardless of treatment pathway. That's an argument for integrated care, not against affirming care.

And that's how the science gets made

"Conflict of interest" is one of many factors to consider when looking at what biases researchers are bringing with them into a study: what assumptions, incentives, and institutional pressures did this researcher bring to the work before they collected a single data point? Funding sources, organizational affiliations, prior published positions, history of speaking engagements, and other public relationships all shape which questions get asked, how studies are designed, and which findings are foregrounded.

The senior author of this paper, Dr. Riittakerttu Kaltiala, is a SEGM keynote speaker. She also presented findings to the Observatoire de la Petite Sirène — a French psychoanalyst collective that lobbied to exclude gender identity from France's conversion therapy ban and co-organized a 2024 conference with SEGM — and CAN-SG, a British-Irish clinician group whose videos YouTube has designated as conversion therapy content.

She served on the advisory board of the Cass Review, another publication that misrepresented data. She testified before the DeSantis-appointed Florida Board of Medicine in favor of banning gender-affirming care for minors. She is a regular at Genspect, which opposes gender-affirming care for people under 25. She has previously published highly criticized and politicized work on gender-affirming care, which she also presented to SEGM and which the organization used to push its agenda.

Speaking of their agenda, SEGM's money runs through donor-advised funds shared with the Heritage Foundation, the Alliance Defending Freedom, and the Family Research Council — all Project 2025 signatories. Donors Trust, described in Mother Jones as the right wing's ATM for dark money, gave tens of millions to these various hate groups, right-wing think tanks, and lobbyist organizations, including SEGM.

Kaltiala has also supported and defended practices that should call her medical expertise into question, if not put her license at risk. Former patients at her Tampere clinic have described years of "assessment" with no therapeutic support, repeated misgendering, and, in one patient's account, being assigned masturbation as homework by a nurse there. (Kaltiala has defended these interactions.) Erin Reed has documented her conversion therapy ties and clinic practices in detail.

"Peer-reviewed" is not a quality guarantee

Acta Paediatrica is a legitimate journal — it was founded in 1921 and is published by Wiley-Blackwell at the Karolinska Institute. Unfortunately, peer review is no guarantee that research is well-designed, well-executed, or well-analyzed. In this case, it failed to catch what Caraballo identified in a single afternoon.

We know that peer reviewers often miss mistakes thanks to a former BMJ editor who tested peer review by inserting deliberate errors into papers and sending them to reviewers. No single reviewer caught all the errors. Most caught about a quarter. He concluded that peer review is "almost useless for detecting fraud" and that it works primarily on trust.

Peer review functions primarily as a credibility signifier rather than a quality guarantee. This raises an issue I've discussed before here: the meaning of "quality data" in scientific inquiry. To muddy the discourse, conservatives often conflate two distinct issues. The first is studies like this one: poorly designed, poorly executed, poorly analyzed, and poorly summarized, with a specific agenda in mind. The second is studies like the one I discussed with Dr. Luke Allen in my Peer Reviewed column about his work, where a term like "low quality data" is not a judgment of his work but a description of how reliably his results could be used to predict future outcomes for other people.

In the former instance, with Kaltiala's work, the data can be interpreted to show the exact opposite of what she claims it demonstrates. In the case of Allen's work, his interpretation is sound, suggesting that his study should be replicated on a larger scale to improve its predictive quality. These are not comparable critiques, though the right would have you believe they are.

How it gets laundered

The network that produces this research is the same network that distributes it. Within hours of publication, SEGM and other connected anti-trans groups had posted it to their social media. Within 48 hours, those posts had collectively received at least half a million views, and from there the spread continued.

The centrist gateway. Matt Yglesias shared the SEGM post almost immediately and without any criticism, moving it from partisan media into the mainstream consideration set. He's not a scientist. He didn't evaluate the methodology. He shared a post from a designated hate group, and on it goes.

The credibility bridge. Benjamin Ryan — an independent journalist who has been widely criticized for his anti-trans writing and who contributes to the New York Times, The Guardian, and NBC News — published a piece on his Substack that largely presents the study's conclusions at face value. He noted the selection bias in one sentence. He did not mention Tables 4 and 5, which explicitly counter the analysis being presented to the public. He also has a prior relationship with Kaltiala. Here, the study gains the veneer of respectable science journalism before it reaches major editors.

Why this works. A new study published in Political Communication argues that disinformation doesn't travel as discrete incorrect facts. It travels as an emotionally resonant narrative. The stories that spread are the ones that affirm what an audience already believes and feels. The paper found that journalists are much more likely to repeat disinformation uncritically when it coheres with preexisting frames that resonate with their audience. The frame here is "maybe we moved too fast on trans youth care" — from a system moving at a snail's pace, doing everything possible to prevent trans youth from actually getting that care.

This playbook has run before. SEGM funds open-access fees for journal articles and commissions systematic reviews. Between 2021 and 2024, it paid McMaster University's Gordon Guyatt $250,000 to conduct systematic reviews, and it chose the research questions. After the reviews were published in 2025, Guyatt wrote to every outlet that cited them, saying they were misconstruing his work. Each refused to publish his letters. His team later called the use of the reviews to deny care "a gross misuse of our work and is unconscionable."

What to do with this

When a new study comes across your desk — whether on social media, in mainstream news, or brought up by a patient or colleague — ask these questions before engaging with the findings.

Check who funded it and who's amplifying it. Conflicts of interest are disclosed in every published paper. Read them. Look at who immediately launches into praise for the work. The network distributing a study sometimes has a hand in its creation as well.

Ask what the study actually measured. Headline claims and outcome variables are frequently not the same thing. Identify what was measured before accepting what was concluded.

Read past the abstract. Authors' conclusions don't always reflect their own data. The full results section sometimes tells a different story. Get comfortable in the weeds.

Translate the findings accurately. Ask what the data actually supports — which is often more nuanced, and sometimes the opposite of what's being claimed.

Distinguish design failure from data limitations. A study can be poorly designed, poorly executed, and politically motivated — or it can be rigorously conducted with transparently acknowledged limitations. These are not the same critique, even when critics treat them as interchangeable.

Stay ahead of the coverage cycle. By the time a study reaches mainstream media, it has often already been framed for the general public. Educating yourself on the full context gives you a heads-up in countering harmful talking points that rely on that framing.

Knowing how this pipeline works isn't just professional due diligence; it's part of what it means to show up for patients in this moment. The disinformation doesn't stop at the lobbyist, the researcher, or the media covering them — it walks into your office. This column exists to make sure you're ready when it does.
