Compassion in Ethical Policy Research

R. J. Briggs  |  Alliance for Policy Research  |  May 14, 2026

What does an ethical approach to policy research look like? Most researchers I know would answer with some version of the same framework: do no harm, get informed consent, treat people as ends rather than means. That framework — call it the Kantian one — is the bedrock of modern research ethics, and it should stay in force. But it is not enough on its own. This post lays out a case for layering a deeper commitment underneath it: a welfare commitment, oriented by compassion, that takes the constraints on people’s capabilities as morally serious in their own right. The case starts with a clinical-trial controversy from the 1990s, moves through the philosophical scaffolding, and ends with what this asks of a research organization in practice.

A clinical trial that should not have run

In the mid-1990s, researchers from the United States and France ran a series of clinical trials in sub-Saharan Africa and Southeast Asia to test whether a short course of AZT could prevent mothers from transmitting HIV to their babies during birth. The trials used a placebo design: some women received the experimental drug, others received a placebo with no active ingredient. The science was rigorous. The participants gave informed consent. And in 1997, the New England Journal of Medicine called the trials unethical.

The objection was not that the participants had been deceived or coerced. The objection was that a placebo arm should never have been on the table. In wealthy countries, an effective treatment regimen already existed. Pregnant women with HIV in New York or Paris received a long course of AZT and rarely transmitted the virus to their babies. The trials in Africa and Thailand compared a short course of AZT against nothing, because nothing was the local alternative — and the local alternative was nothing only because nothing was what those countries could afford. The participants consented. The methodology was clean.

This is the hardest kind of ethical problem in research, and it does not stay in medicine. It shows up wherever researchers in well-resourced institutions design studies of populations whose capabilities are constrained by conditions the researchers do not share. It shows up in policy research constantly. The question the AZT controversy forced into the open was whether informed consent is enough to make a study ethical when the participants are choosing between a research arm and nothing. The answer the field eventually reached was no.

That answer is uncomfortable, because the natural framework most of us bring to research ethics says yes. Do no harm. Respect autonomy. Get consent. Treat persons as ends in themselves and never merely as means. This is the framework Immanuel Kant gave us in the eighteenth century, refined into its modern form as the Formula of Humanity, and it has done enormous good in the world. It animates the Nuremberg Code and every informed-consent form a participant has ever signed, and it extends the do-no-harm tradition that runs back to the Hippocratic oath. The Alliance for Policy Research, where I do my work, takes it seriously as a starting point.

But the AZT trials show that the Kantian framework is not enough. The participants consented. The harm, if there was harm, did not come from treating individual participants as means. It came from designing a study whose structure took advantage of the fact that the local alternative for poor participants was nothing. The Kantian question — did each participant agree? — gave the trials a green light. A different question — should researchers be designing trials in which the only choice available to poor participants is the research arm? — gave them a red one. The second question is the one international research ethics eventually adopted. The Declaration of Helsinki was revised in 2000 to require that placebo controls be used only when no proven intervention exists, and AZT trials of that kind have not been run since.

Compassion as perception

The question that overrode informed consent was a welfare question. It asked not whether each participant had been treated as an end in themselves, but whether the design of the research was acceptable given who the participants were and what their lives could afford. It treated the constraints on participants’ capabilities as morally serious in their own right, not just as a backdrop against which consent was extracted. The philosopher Martha Nussbaum has a useful name for this kind of moral attention. She calls it compassion, and she means something specific by the word: the act of seeing that a particular person’s capabilities have been cut down, and recognizing the loss as morally serious. Compassion in this sense is perception, not sentiment. If you cannot see the actual constraints a community is living under, you have missed something real about them. That has nothing to do with feelings.

The distinction matters because empathy, the close cousin of compassion in everyday usage, fails policy work in a specific and well-documented way. Paul Bloom has argued, drawing on a substantial empirical literature, that empathy is parochial and innumerate. It attaches preferentially to people who are like us, near us, or otherwise vivid to us. It does not scale: people will donate more to help one named child than to help eight. The line often attributed to Stalin — the death of one person is a tragedy, the death of a million is a statistic — captures the same problem. Empathy is real, and it does moral work in small-group life, but it is the wrong instrument for designing studies whose effects fall on populations rather than on individuals one can picture.

Compassion-as-perception is what was missing from the original trial designs. The researchers saw participants who could consent. They did not see the structural fact that the participants had nothing else to consent to. The Kantian framework let them stop looking. The welfare framework would have made them keep looking.

None of this means the Kantian framework should be discarded. The Formula of Humanity is a useful constraint, and any serious research ethics holds onto it. What it means is that the Kantian framework is a side-constraint, not a foundation. Underneath it, doing the deeper work, is a welfare commitment in the lineage Amartya Sen developed: priority to those whose capabilities have been most constrained, transparency about the welfare function in use, and the recognition that choosing one over another is itself a normative commitment. When the Kantian constraint and the welfare commitment pull in different directions, the welfare commitment is the deeper one. The Kantian constraint protects us from violating individual persons. The welfare commitment protects us from designing studies that violate populations, even with the consent of every individual involved.

A more optimistic reading of this case is also available. Innovation often resolves what looks like an irreducible trade-off. The Thailand short-course trial eventually delivered a methodology that was both more rigorous and more ethical, and the world is better for it. Researchers who hold that trade-offs will eventually dissolve, given enough ingenuity and time, are right often enough to take seriously.

But trusting in eventual dissolution is itself a working principle, and it has costs. It tells researchers nothing about what to do in the meantime, while the dissolution is still pending. The placebo participants are still on placebo. The trial is still being run. An organization needs principles it can act on today, and the principles it acts on while waiting for innovation matter as much as the principles it adopts once the innovation arrives.

This is why substantive positions, taken openly and in advance, are protective. They let a research organization say no before the pressure of a specific contract, a specific funder, or a specific cleaner-than-clean methodology makes saying no harder. They reduce the negotiating surface. An organization without a stated position has to argue its way to a no every time. An organization with a stated position has only to enforce one.

Three practices

For a policy research organization, this commitment translates into three concrete practices. The first is independence not just from external pressure but from internal mood. The standard story of research independence focuses on the obvious failure mode: a funder or client demands a particular conclusion, and the researchers feel they must comply or risk losing the client. That happens, and the usual protections work against it. A subtler failure happens inside any large research institution, where certain conclusions become risky to state — not because anyone has been told to avoid them, but because the institutional mood has made them awkward. Climate change discussed in carefully calibrated language. Gender approached with growing circumspection. Equity embraced or avoided depending on the audience. Nothing directed; everything learned. A research organization that does not commit, explicitly and in writing, to recognizing settled science even when allies dislike the finding will eventually drift toward whatever the institutional mood permits.

The second practice is naming, openly and in advance, the kinds of engagements the organization will not take. No researcher at APR would work on a contract to optimize ICE detention facilities. The methodology is not the problem. The use of the findings is. The honest move is to say so out loud, before the contract is offered, rather than negotiate the refusal case by case under the pressure of a specific funder. This is not advocacy. It is clarity about what kind of research we are in business to do.

The third practice is recognizing that the pillars of research quality — integrity, transparency, inclusivity, independence, usefulness — pull against each other in practice, even when they look harmonious on paper. Inclusivity and independence pull against each other when communities and evidence disagree. Usefulness and integrity pull against each other when sponsors need certainty and the evidence supports only probability. Acknowledging the trade-offs openly is the first step toward resolving them honestly. The north star, when the pillars pull apart, is compassion: perception of where capability has been constrained, and priority to the people for whom that constraint is heaviest.

What needs careful thinking

A few questions remain open, and it would be dishonest to suggest the framework above sorts them all out.

The first is what counts as settled science. In biomedicine and exposure science the answer is often clear: replicable methods, peer-reviewed consensus, known error rates. In social science and policy research the answer is harder. Consider the minimum wage. Within the empirical labor-economics field, the consensus on moderate increases — that disemployment effects are small or near zero — is about as well-established as social-science findings get. In the public policy debate, the question is treated as wide open. A research organization committed to recognizing settled science has to decide, case by case, when to defer to the field and when to acknowledge real dissent. The smoking-gun reasoning we rely on for historical and observational evidence is rigorous on its own terms, but it does not produce the same kind of consensus that controlled trials do. We will have to make calibration calls and stand by them.

The second is the median case. The framework handles the clearest engagements cleanly. It handles less well the median case — say, a study of an inclusionary zoning policy where findings of positive effects vindicate one coalition and findings of negative effects vindicate another, and both coalitions will use whatever we produce. Most studies are median cases. Refusing the clear cases is necessary but not sufficient.

The third is funding. Some funders will prefer not to support a research organization that takes substantive positions out loud. A trade association looking for policy-relevant analysis may want neutrality on the agenda, not stated values about which engagements we will not take. The cost is real, and we have not yet had to weigh it against a specific opportunity. We will. When we do, we will be honest about how we resolved it.

Where this lands

My wife Amy is a wonderful person. As part of her leadership training over the years, she developed what the training calls a possibility for herself — a stated commitment to who she chooses to be. Hers is the possibility of everyone loved, celebrated, and living an extraordinary life. She really does live by it. I have been thinking about that lately, alongside the question of what kind of research organization APR wants to be. The draft statement I would offer reads like this:

APR conducts policy research in the service of human flourishing — the capacity of persons to live fully expressed lives within their own situated possibilities. Compassion orients our work: we perceive how sociocultural conditions constrain capability, and we research the questions that lighten the weight of unearned advantage and disadvantage, while preserving the individual agency of people to make their own successes and failures.

Compassion at the foundation. The Kantian constraint preserved as a side-constraint. The marginalized as the population whose problems we choose to work on — without committing us to agreement with what the marginalized themselves want from the research. Standing for the disadvantaged is not the same as agreeing with them. We reserve the right to conclude against their premises and their wishes when the evidence demands it. The empowerment we offer is the discipline of method, applied to the questions that matter most for the people with the least. The findings go where the evidence sends them. That is harder than advocacy, and it earns more trust over time.

The AZT trials are a long way from most policy research. But the structure of the problem they exposed — the insufficiency of informed consent when the participants’ alternatives are constrained, and of individual ethics when the design itself takes advantage of those constraints — is the structure of policy research too. The Kantian framework will tell you what you cannot do to a person. It will not tell you which studies to design, which engagements to refuse, or which trade-offs to name. For that, a research organization needs a deeper commitment, written down, and acted on before the pressure arrives.
