The Real Lesson of That Cash-for-Babies Study

Breaking news: A new study on a contentious political issue confirmed that a lot of people’s preexisting opinions were correct all along.

Perhaps I’m being a little unfair. But when The New York Times published an article about the study last week, it seemed perfectly designed to garner “I told you so”s.

The study, published in the Proceedings of the National Academy of Sciences (PNAS), reported that unconditional cash transfers to poor mothers changed their infants’ brain activity. Using a method called electroencephalography, or EEG, researchers placed a special cap wired with electrodes on each 1-year-old’s head to detect electrical activity as signals were sent back and forth across their brain. Babies whose mothers received $333 a month, the study’s authors claimed, had more of the brain waves that tend to be linked to better cognitive and socio-emotional skills. Since it came from a randomized trial (not just an observational study), the result seemed groundbreaking, with important policy implications. The cash transfers had allowed the parents to change some aspects of the kids’ environment—perhaps through better nutrition, less parental stress, or any number of other impacts—and in turn, the babies’ brains had changed for the better.

But within days, that triumphant tale was unraveling. Reviewers on blogs and social media, myself included, pointed out that the study didn’t paint nearly as clear a picture as its authors and media coverage suggested. Vox updated its article to add criticisms of the study; the Niskanen Center, a think tank, added a disclaimer to its blog; and the UBI Center, which looks into research on universal basic income, removed its posts on the study entirely.

What went wrong? Put simply, the study provided very weak evidence. Few of the findings were statistically significant. The data suffered from a lot of noise. But what’s more interesting is why so many people were so eager to share the news of research whose results were ultimately anemic. I think two factors are at play: The study’s methods were based in neuroscience, and it had moral implications. Alone, either of those ingredients can tempt people to uncritically believe a study. Together, they’re a recipe for hype.

The PNAS paper had an undoubtedly impressive setup: saying that a well-run randomized controlled trial produces the “gold standard” of evidence is a cliché, but that’s because it’s true. One thousand children were recruited for the study, and their families were randomly chosen to receive either $333 or $20 a month (the former was, on average, a 20 percent increase in income for the families). Researchers took EEG readings from 435 of the kids when they were 1 year old, and compared the patterns from the $333-a-month and $20-a-month groups. In the paper, the researchers reported that the kids whose mothers received the higher cash amount had more high-frequency “beta” and “gamma” waves, which the brain tends to produce when a person is paying a lot of attention to a task.

But it’s not clear that there really was any meaningful difference between the brain waves of the two groups of babies. Several critics, including the Wharton School’s Joe Simmons and Drake University’s Heath Henderson, pointed out that after the authors ran a statistical correction for false-positive results, all of their planned, preregistered analyses gave statistically nonsignificant results. The only significant findings appeared when the authors ran extra, unplanned analyses. (This is explained in more detail in the Astral Codex Ten newsletter.) These analyses are less convincing than the preregistered ones because they were decided on after the researchers had seen the data. If you already know how the data look, there’s more of a chance of unconscious biases creeping into your analysis decisions, subtly shifting the results in the direction you favor. This is the very phenomenon that preregistering your analyses is designed to avoid.
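To see why a multiple-comparison correction can erase nominally significant findings, here is a minimal sketch. The p-values below are hypothetical, not the paper's, and a simple Bonferroni correction stands in for whatever specific procedure the authors used: when you run many tests, each one must clear a much stricter bar.

```python
# Hypothetical illustration of a multiple-comparison correction.
# These p-values are invented; the study's actual values and
# correction method may differ.

def bonferroni(p_values, alpha=0.05):
    """Return, for each test, whether it survives a Bonferroni correction:
    each p-value must beat alpha divided by the number of tests run."""
    m = len(p_values)
    return [p < alpha / m for p in p_values]

raw = [0.03, 0.04, 0.01, 0.20, 0.35]  # five hypothetical tests; three look "significant"
print(bonferroni(raw))  # threshold is now 0.05 / 5 = 0.01, so none survive
```

With five tests, results that looked significant at the conventional 0.05 level all fail the corrected 0.01 threshold, which is roughly the fate the preregistered analyses met.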

The statistician Andrew Gelman also looked into the study’s data—which, to the authors’ credit, they shared openly online—and found that splitting the children into two random groups and running the same analysis produced very similar-looking differences in brain-wave patterns to those found in the study. In other words, the pattern of differences between the $333 and $20 groups could have been a product of chance.
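Gelman's check is in the spirit of a permutation test. The sketch below uses simulated data, not the study's: it draws one noisy "EEG power" number per child with no treatment effect at all, then reshuffles the group labels repeatedly to see how often a difference as large as the "observed" one arises by chance.

```python
import random
import statistics

# Simulated data with NO true treatment effect; this is an illustration
# of the random-split logic, not a reanalysis of the study.
random.seed(0)
eeg_power = [random.gauss(10, 3) for _ in range(400)]  # one noisy measure per child

treated = eeg_power[:200]   # pretend the first half got $333/month
control = eeg_power[200:]   # pretend the second half got $20/month
observed = statistics.mean(treated) - statistics.mean(control)

# Permutation test: shuffle the labels many times and count how often a
# purely arbitrary split produces a gap at least as big as the observed one.
n_perm = 2000
count = 0
for _ in range(n_perm):
    random.shuffle(eeg_power)
    diff = statistics.mean(eeg_power[:200]) - statistics.mean(eeg_power[200:])
    if abs(diff) >= abs(observed):
        count += 1

p_value = count / n_perm
print(f"permutation p-value: {p_value:.3f}")
```

If arbitrary splits routinely match the real treatment-versus-control gap, the gap carries little evidence of a causal effect, which is the worry Gelman raised.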

Even if the statistical results were clear-cut, though, we would have to follow a daisy chain of logic to conclude that they have societal implications. The most obvious leap is the one between the brain measures and the kids’ psychological development.

There’s something beguiling about a study that uses a brain measure as its main outcome, rather than a boring old test result or a self-rating on a questionnaire. It seems to suggest that the research is getting at something deeper—and more scientific. But that isn’t actually true. As the psychiatrist Sally Satel and the late psychologist Scott Lilienfeld argued in their 2013 book, Brainwashed, scientists (and everyone else) tend to be so excited by high-tech brain-imaging results (“This type of therapy changed the metabolic activity in drug addicts’ brains!”) that they forget to ask more prosaic, but more important, questions (“Did the therapy reduce the addicts’ reliance on drugs?”).

The cash-transfer study supposedly found that extra cash affects kids’ brains. That might be interesting to neuroscientists, but since the study doesn’t report direct evidence of a behavioral effect, it’s not of much use to anyone else—at least not yet. The authors did examine the effects of the cash transfer on one psychological variable: parent-reported language “milestones,” such as whether a baby starts to say “ba-ba” and “da-da” at the expected age. But the results were so underwhelming that they were relegated to an appendix and only cursorily referenced in the main paper.

If the brain-wave results don’t relate much to behavior now, then in order for the paper to matter for policy, it needs to make a convincing case that they might matter sometime later. The researchers’ plan is to follow the children for several more years and eventually run such behavioral analyses. But for now we need to rely on the previous literature. The authors cited a few studies that found correlations between brain-wave measures and cognitive abilities in older children. But these studies were themselves quite small and ambiguous; EEG is far from straightforward to measure, especially in infants, which adds a lot of noise to the findings. Not only that, but as the researchers themselves pointed out, some other studies have found no such links. Ultimately, the neuroscience in this paper ends up at quite a remove from the psychological effects that society is really interested in.

Plenty of people desperately want results like these to be real and meaningful, because they (understandably) want to use science to help poor mothers and their children. This study is just asking to be deployed by advocates for cash transfers or a universal basic income. The researchers couldn’t have planned it this way—after all, the experiment began in 2018—but the study appeared just as President Joe Biden was pushing to expand the child tax credit, which likely added extra incentive for supporters of the policy to conclude that the association between cash transfers and baby-brain development is ironclad.

But a single study—especially one with as many issues as this one—should never be taken as unshakable proof of anything. Think about what would’ve happened if the study had shown absolutely no effect, or if it had found that the $333-a-month kids had significantly worse brain function. Proponents of cash transfers would not have thrown their hands up and started lobbying against them—nor should they. Think of science the same way you would the news: Pay too little attention, and you risk lacking the information you need to live a healthy life and be a responsible citizen. But latch onto every breaking story, and you’ll find yourself wading through a sea of red herrings with no sense of the broader narrative.

And there is indeed a broader story about cash-transfer studies. They’ve mostly been run in low- or middle-income countries, and they show promising effects overall, with reviews in recent years pointing to potential benefits in, for instance, child nutrition and mental health. We still have a lot to learn if we want to get the most out of income-boosting interventions (for example, should they be combined with programs that coach parents on nutrition, hygiene, and child development?), but the point is that the scientific literature is large and nuanced. In that context, it’s a mistake to seize on every study that purports to show cash-transfer benefits and trumpet it to the high heavens—at least without carefully checking its actual results.

The studies add to the many obvious reasons a society might want to make low-income families richer. Some are based on scientific evidence about the detrimental effects of poverty; some are based on ethical arguments about equity and equality; some are based on common sense. But on the list of the most compelling reasons, “it causes hard-to-interpret changes in a notoriously fickle and noisy brain-wave measure” is somewhere near the bottom.