In lecture yesterday, rather than ranting about the failures of regression, I tried to find a positive spin on causal methods. The Freedman critique deserves a month-long deep dive, and I’ll save that for next semester. Instead, I tried to explain how some of these bad ideas come from good intentions, and I attempted to present a simple derivation of LATE, the Local Average Treatment Effect. I had other plans too! But this surprisingly took the entire lecture.

Today, I want to summarize what we concluded. LATE is a clever observation but requires a lot of care to explain. In randomized experiments alone, it doesn’t buy you much over the standard analysis. But it has warped the minds of economists who have decided it gives them license to extract cause through cleverness.

In its simplest form, the LATE is a way to estimate the causal effect of a treatment that an experiment can only indirectly probe. Using our causal graphs from Tuesday, suppose that we can apply an intervention, labeled here by Z. Z can cause the treatment we care about to happen, here labeled T. And this treatment T has some associated outcome Y. We’d like to measure the effect of T on Y.

The running example I used in class was cancer screening, but this model applies to almost any randomized clinical trial. In a trial, Z is the randomization, whether a person is assigned to treatment or control. Once randomized, a patient is offered a treatment T. Some patients accept the treatment, but some may decline. In the trial, we are not randomly assigning patients to receive a treatment. We’re randomly assigning them to be *offered treatment*.

This seems like a minor point, but it messes up our statistics. If we drop all of the patients who decline treatment and compute a treatment effect as though they weren’t there, we’ll end up with a biased estimate of the treatment effect. Patients who decline treatment are likely different than those who accept it. Perhaps they are more sick or of different education levels. There are a variety of confounding explanations.

There are two remedies to remove this bias. The first and the simplest is called the *intention to treat* principle. This principle demands we only estimate the effect of *offering* treatment on outcome. That is, if we intervene using Z, we should only estimate the effect of Z on Y. Yes, this is indirect. But the estimate is unbiased and unconfounded. And if huge numbers of patients are rejecting your treatment offer, then maybe your treatment isn’t as great as you think it is.

There’s an alternative remedy that uses statistics to cleverly estimate the effect of T directly: the LATE. What we’d like to do is filter our trial in advance to only the people who would ever accept our treatment and then look at the benefits in this winnowed subpopulation. Let S_A denote this subpopulation of people who would accept your treatment. Though you don’t know what S_A is in advance, LATE will provide a path to estimating the local effect of the treatment in this unknown subpopulation.

To apply LATE we have to assume:

1. The causal graph we drew is valid, so Z has no direct effect on Y, but Z has some effect on T.

2. The only people who receive treatment T are those who are offered it.

Assumption 1 is very reasonable. Assumption 2 is stronger than what is needed to apply LATE in general, but it’s reasonable in the context of a clinical trial, and it makes for a cleaner presentation in this short blog form.

With these two assumptions, we can derive the following relationship:
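In symbols, with $\mathbb{E}$ denoting expectation over the trial population and $\mathbb{P}[S_A]$ the fraction who would accept treatment:

$$
\text{ATE}_{S_A} \;=\; \frac{\mathbb{E}[Y \mid Z=1] - \mathbb{E}[Y \mid Z=0]}{\mathbb{P}[S_A]}
\;=\; \frac{\mathbb{E}[Y \mid Z=1] - \mathbb{E}[Y \mid Z=0]}{\mathbb{E}[T \mid Z=1] - \mathbb{E}[T \mid Z=0]}.
$$

The second equality uses assumption 2: no one in the control arm receives treatment, so the effect of Z on T is exactly the acceptance rate $\mathbb{P}[S_A]$.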

In words, this identity says that the treatment effect on the subpopulation who would accept treatment is equal to the intention-to-treat effect that we observe divided by the fraction of people who would accept treatment. Or, in other words, the indirect effect of T on Y is equal to the effect of Z on Y divided by the effect of Z on T.

What’s nice about this formula is that we can use the standard estimators of average treatment effects to estimate the right-hand side of the LATE formula from trial data:
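Plugging in within-arm sample averages (bars denote means over each randomization arm):

$$
\widehat{\text{LATE}} \;=\; \frac{\bar{Y}_{Z=1} - \bar{Y}_{Z=0}}{\bar{T}_{Z=1} - \bar{T}_{Z=0}}.
$$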

In randomized clinical trials, the LATE estimate is the standard estimate of absolute risk reduction using the intention to treat principle divided by the fraction of people who accepted treatment.
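To make this concrete, here’s a small simulation sketch (all numbers invented for illustration; assumption 2 is baked in by setting T = Z·A). Accepters and decliners are given different baseline risks, which is exactly the confounding that would bias a naive comparison of treated versus untreated patients, yet the ratio estimator still recovers the effect among accepters:

```python
# Sketch: LATE = ITT divided by the acceptance rate, in a simulated trial.
# Hypothetical numbers throughout; assumption 2 (only those offered
# treatment can receive it) enters via t = z and accepts.
import random

random.seed(0)
n = 200_000
effect = -0.02   # treatment lowers event probability among accepters

ys = {0: [], 1: []}
ts = {0: [], 1: []}
for _ in range(n):
    accepts = random.random() < 2 / 3      # would accept if offered (S_A)
    base = 0.05 if accepts else 0.08       # accepters differ from decliners
    z = random.random() < 0.5              # randomized offer
    t = z and accepts                      # one-sided noncompliance
    p = base + (effect if t else 0.0)
    y = random.random() < p                # adverse outcome
    ys[int(z)].append(y)
    ts[int(z)].append(t)

mean = lambda xs: sum(xs) / len(xs)
itt = mean(ys[1]) - mean(ys[0])            # effect of Z on Y, about effect * 2/3
accept_rate = mean(ts[1]) - mean(ts[0])    # effect of Z on T, about 2/3
late = itt / accept_rate                   # about the true effect, -0.02
print(f"ITT  = {itt:.4f}")
print(f"LATE = {late:.4f}")
```

Note that the ITT estimate comes out to roughly two-thirds of the per-accepter effect, and dividing by the acceptance rate restores it.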

Now, what does this buy us? As a medical conservative, my take is this tells us that we should just report the intention to treat analysis in clinical trials. It is reassuring that a LATE analysis can’t turn an insignificant result into a significant one (i.e., the confidence interval for LATE will contain 0 whenever the confidence interval for ITT does). At best we’re just going to increase our risk reduction estimate by a small factor.

We did an example in class from the New York Health Insurance Plan trial of breast cancer screening. In this trial, only two thirds of the participants accepted the offer to receive a mammogram. The risk reduction of breast cancer death within five years of randomization using intention-to-treat analysis was 0.08%. Applying LATE, this number moved to 0.12%. Kevin Ma asked, “Is that good?” It’s a great question. The answer is in the eye of the beholder. I personally like the fact that LATE, which gives a more direct estimate of the effect we care about, doesn’t move the needle much. This should be reassuring for trialists.
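The arithmetic from the HIP example, as a one-line check (numbers taken from the paragraph above):

```python
# ITT absolute risk reduction of 0.08%, acceptance rate of two-thirds.
itt = 0.0008
accept_rate = 2 / 3
late = itt / accept_rate
print(f"{late:.4%}")  # prints 0.1200%
```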

Now, the problem with LATE is that it has emboldened people (mostly in economics) to invent crazy thought experiments and present them as “rigorous” or “credible.” They assume that you can use this to pretend you randomized when you didn’t. Consider this dumb causal model for the sake of argument:
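The model is linear (written here with the noise term folded into the confounder C):

$$
Y = \beta\, T + C,
$$

where T itself may depend on C.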

Here β is the treatment effect we’d hope to estimate, but C is some confounding variable, possibly unobserved, and we’d like to remove its influence. But now let’s assume we have some magical variable Z that satisfies
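the standard instrument conditions: Z is uncorrelated with the confounder but not with the treatment,

$$
\mathbb{E}[ZC] = 0 \qquad\text{and}\qquad \mathbb{E}[ZT] \neq 0.
$$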

Then, if we multiply everything in our model by Z and take expected values, we’ll find the expression
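Writing this out for the linear model $Y = \beta T + C$ with $\mathbb{E}[ZC] = 0$:

$$
\mathbb{E}[ZY] \;=\; \beta\,\mathbb{E}[ZT] + \mathbb{E}[ZC] \;=\; \beta\,\mathbb{E}[ZT]
\qquad\Longrightarrow\qquad
\beta \;=\; \frac{\mathbb{E}[ZY]}{\mathbb{E}[ZT]}.
$$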

This is precisely the LATE estimator. I’ve just written it this time with expected value notation instead of using sums. LATE has convinced people that they can just search for “instruments” Z that occur in nature to estimate all sorts of things from observational data. We want to estimate the effect of the number of police on crime, and use election years as the instrument because mayors deploy more forces during election years. We want to estimate the effect of family size on child outcomes, so we use the birth of twins as an instrument. The list goes on and on.

All of these observational papers are certainly nonsense. They are stories dressed up with hundreds of pages of statistical robustness checks. When the instrument is intentionally randomized by the investigator, instrumental variables are useful statistical tools to estimate indirect effects. But let’s not pretend there is any value to the fanciful instruments hallucinated by the imaginative armchair experimenter.

Extra points for the Fugazi reference.