But what if the programs aimed at solving those problems don’t work? Or cost too much? Or create unforeseen consequences? How can they be improved?
Only a handful of foundations try to address those bigger questions. They want, not just to solve problems, but to improve the way we solve problems. One of the most interesting is the Laura and John Arnold Foundation (LJAF), which was created in 2008 and reported assets of $1.7bn at the end of 2015. The Arnolds — he’s a former hedge fund manager whose net worth is estimated to be $3.3bn — say the LJAF will
systematically examine areas of society in which underperformance, inefficiency, concentrated power, lack of information, lack of accountability, lack of transparency, lack of balance among interests, or other barriers to human progress and achievement exist
and then apply “a rigorous and comprehensive entrepreneurial problem-solving approach” with the “goal of igniting a renaissance of new ideas and approaches applied to persistent problems.” Whew.
Big ideas, to be sure, but this foundation has already made big waves. As part of a wide-ranging criminal justice initiative, LJAF funded the creation of a database that has changed the way judges in numerous jurisdictions, including the states of Arizona, Kentucky and New Jersey and the city of Chicago, decide which defendants will be released before they go to trial. For better or worse — better, in my view — it’s a vivid example of how philanthropy can change public policy, as this story about bail reform in New Jersey explains.
LJAF stands apart because of its commitment to evidence, both to support its own programs and to improve the work of governments, foundations and nonprofits. In 2015, it launched a division called Evidence-Based Policy and Innovation, which, among other things, funds rigorous evaluations that include, where possible, randomized controlled trials (RCTs) designed to figure out whether programs work. LJAF’s evidence-based policy team is currently funding 41 evaluations of programs on a variety of topics, according to Jon Baron, the foundation’s vice president of evidence-based policy.
These evaluations, Baron told me last week, are “aimed at building the body of social programs with strong replicated evidence of impact on important life outcomes.”
Unhappily, that’s harder than you might think.
Most programs, when tested, don’t work
“A lot of programs claim to be evidence-based,” Baron explains, but “many of them overstate their evidence.” When subjected to rigorous tests, many well-intended programs fall short. “Unfortunately,” Baron says, “they (the tests) usually find that the program being evaluated does not produce the hoped-for effects. There’s a very high proportion of disappointing findings.”
Trained as a lawyer, Baron worked on Capitol Hill and at the Pentagon before founding a DC-based nonprofit called the Coalition for Evidence-Based Policy in 2001. He joined LJAF in its Washington office when the foundation absorbed much of the coalition’s work.
Given the Trump administration’s disregard for facts, let alone evidence-based policy, I asked Baron for, er, evidence that evidence can change how the government operates. A clear example, he told me, is the work of the Nurse Family Partnership, a nonprofit that provides home visits to low-income, first-time mothers; several long-term randomized trials found that these home visits improve children’s well-being, even 10 or 15 years later.
The Bush and Obama administrations expanded funding for the Nurse Family Partnership, its affiliates and similar programs that provide home visits to pregnant women. Current federal funding is about $400m per year. “There’s no question that the evidence drove the funding, in both Democratic and Republican administrations,” Baron said, adding that not all the programs have been rigorously evaluated.
Meantime, many of LJAF’s evaluations are done with foundations and nonprofits. The Michael and Susan Dell Foundation, for example, has backed Bottom Line, a nonprofit that helps low-income, first-generation students get into college and graduate; preliminary studies indicate that the program works, so LJAF is funding an RCT being led by academics Andrew Barr of Texas A&M and Ben Castleman of the University of Virginia. “Bottom Line has some pretty good evidence, but they wanted to produce definitive evidence,” Baron said. The Bottom Line study is expected to cost only $159,000, in part because it relies on existing data that track student achievement. “You can do large, low-cost randomized trials,” Baron says, particularly if they build on existing data. The gains from learning which programs work and which don’t should far outweigh the costs if, subsequently, more money flows to effective programs and less is wasted on those that are subpar.
Two other examples of LJAF-funded evaluations:
- A three-year RCT of a program in Baltimore that provides free eyeglasses to disadvantaged students to improve learning. Partners include Johns Hopkins University, the Abell Foundation and Vision to Learn, a Los Angeles-based nonprofit.
- A seven-year RCT of a Big Brothers Big Sisters mentoring program for young people at risk of criminal involvement. The evaluation will track about 2,500 young people at 20 Big Brothers Big Sisters agencies across the US.
I asked Baron why rigorous evaluations aren’t more common. Cost is often a barrier, he said, and it’s not easy to find researchers with the knowledge and organizational skills to carry out RCTs. But “the main bottleneck,” he said, “is that there’s not a strong incentive yet to build this kind of evidence.” Some funders demand evidence of impact, but many do not. “Evidence of effectiveness, in many cases, is not a main criteria in deciding what gets funded,” he said. Which is a little nutty, no?
To promote evidence-based policy, Baron and LJAF have built a website called Straight Talk on Evidence that aims to “distinguish credible findings of program effectiveness from the many others that claim to be.” It’s very good; you can subscribe or follow the blog on Twitter at @NoSpinEvidence. This week, Straight Talk on Evidence called attention to a well-designed study of a program that helped prevent sexual assault on three university campuses in Canada.
Two concluding thoughts: First, LJAF’s commitment to evidence-based policy comes at a time when the validity of social science research — and, for that matter, medical research — is being questioned as never before. (For more, read last week’s fascinating New York Times Magazine story, When the Revolution Came for Amy Cuddy.) As it happens, LJAF has a separate initiative on Research Integrity that, among other things, addresses what has been called the replication crisis; it’s funding efforts to reproduce psychology and cancer studies.
Second, LJAF’s work on evidence raises questions about whether other large foundations might want to take a similarly expansive approach to improving philanthropy. The Gates and Raikes Foundations, for example, are trying to improve the quality as well as the quantity of charitable giving, Gates through the Giving Pledge and Giving by All and Raikes with Giving Compass. The Open Philanthropy Project wants to empower the effective altruism community. Thinking even more broadly, some foundations aim to strengthen science or promote democracy because, they believe, those forces are the underlying drivers of health, education and prosperity. Such efforts have the potential to do enormous good–although their benefits will prove hard to measure. Which reminds us that, valuable as it is, evidence cannot answer all our questions.