Nonprofit Chronicles

Journalism about foundations, nonprofits and their impact

Slow progress is better than no progress. So GuideStar’s Platinum designation, which encourages nonprofits to share their results in an important new way, deserves a round of polite applause, if not three cheers. It’s a step in the right direction, as well as a reminder of the long road that lies ahead–if the destination is, as it should be, a social sector that identifies and rewards the most effective nonprofits.

And, on a personal note, I’d like to thank Jacob Harold, the president and CEO of GuideStar, and his colleagues for developing Platinum because it is going to save me time – by helping me decide which nonprofits are worth my attention as a reporter, and which are not.

Let me explain.

Based in Washington, DC, nonprofit GuideStar is the world’s largest source of information on nonprofits. Unlike, say, Charity Navigator or Charity Watch, it isn’t an evaluator or watchdog group. Instead, GuideStar confers upon nonprofits what it calls a “status” – Bronze, Silver, Gold and now Platinum – that depends upon how much information each nonprofit chooses to share. Bronze requires basic information, Silver requires financial data, Gold requires descriptive information about impact — essentially, the answers to five simple yet powerful questions known as Charting Impact — and Platinum, introduced this month, requires charities to report at least one quantitative metric about their work.

Yes, just one quantitative metric. This is a low bar by design, Harold told me by phone the other day.

“At this point, we just want to provide a platform for nonprofits that want to share data about how they achieve their outcomes, on their own terms,” he said. “We want to accurately reflect how nonprofits think about their own progress.”

As it turns out, because nonprofits are given so much latitude–they can choose from a menu of about 700 common results or select their own–the metrics that nonprofits choose to report can be revealing, sometimes unintentionally so.

Pencils of Promise (PoP), a global education NGO, reports not just on how many schools it supports (an output) but on how its interventions improve student performance (an outcome). PoP says its 5th and 6th grade students “see 100% greater improvements than their peers in reading comprehension” and students whose teachers get support from PoP earn promotions to the next grade at an 88% rate, very slightly better than a control group. That doesn’t tell you a lot, but it does indicate that PoP is trying to measure things that matter.

Technoserve, which develops market-based solutions to global poverty, also appears from its Platinum profile to be measuring its impact in meaningful ways. Technoserve reports that in 2015 its work attracted $16.7 million in private sector investments in agriculture and $19.8 million in business investments, and benefited 319,000 farmers, businesses and employees “for whom we have evidence of increased revenue or a new job as a result of our support.” What kind of evidence is unclear, but claims like these are a good way to begin to analyze the group’s effectiveness–or, if you are a potential major donor, to begin a conversation about impact.

By contrast, NGOs that choose to report on their Twitter and Facebook followers or the number of hours of training they deliver are not saying much about their effectiveness. Then there is the conservation group with a budget of about $14,000 in 2014 that reported that it protected and saved 137 “endangerd species” (sic), a claim that raises more questions than it answers.

Outputs and outcomes

Platinum makes no distinction between outputs and outcomes, a crucial difference, as industry consultant Robert Penna writes on his blog:

As an example, if an organization delivers meals-on-wheels, presumably to enhance the nutrition and health of the recipients, what is the value in reporting on the number of meals delivered? Yes, it might suggest how busy the organization is.  But unknown is whether the meal was consumed, whether the intended recipient was the one who ate it, or whether it was having the intended effect.  How, therefore, does a count of meals delivered help the organization do a better job?  What guidance does such a metric give the donor?

McDonald’s boasts of “billions and billions served,” Penna notes, but to what end?

Harold, of course, is well aware of the limits of Platinum. He has written:

This launch is the first step of a long journey. Right now our primary goal is to accurately describe the diversity of programmatic measures across the nonprofit sector. There’s an immense amount of variety among types of nonprofits, and how they’re measuring their activities, outputs, outcomes, or impact. We want to honestly reflect that variety.

Then—and only then—can we move together as a sector toward more of an outcome orientation. The field has too often let the perfect be the enemy of the good when it comes to performance measurement.

At GuideStar we take for granted that outcome measures are more powerful than output measures. But we also recognize we cannot instantly skip to a world where all nonprofits have perfect outcome metrics.

Measuring impact is especially hard for advocacy groups, which typically work in coalitions and over long periods of time.

Harold, who has spent a decade working on effective philanthropy at the Hewlett Foundation and GuideStar, told me: “I’ve been humbled by how hard it is for some organizations to measure lasting results.”

Incentives?

For GuideStar, the next step is to persuade more charities to achieve Platinum status. Fewer than 1,000 have done so, but the Platinum offering is less than a month old. About 8,000 nonprofits are rated Gold, and another 29,000 are Bronze or Silver.

GuideStar profiles are widely distributed, so there’s an incentive for nonprofits that are doing good work to make sure their information is complete, up-to-date and meaningful. GuideStar partners and clients include the donor-advised funds of Fidelity, Schwab and Vanguard, and more than a dozen community foundations. “Our hope is that other people — other than GuideStar — will make judgments about the quality of the data that groups are sharing,” Harold says.

As a reporter who writes about foundations and nonprofits, with an email inbox that is overflowing with pitches from PR people, I’m ready to use GuideStar as a filter. After a month or two, which should be ample time for nonprofits to fill out their profiles, I’m going to turn down requests for coverage of nonprofits that do not either (1) have Platinum status or (2) have a recommendation from an independent evaluator such as GiveWell, The Life You Can Save, ImpactMatters or Animal Charity Evaluators.

It’s too much to ask every nonprofit to supply independent evaluations of its work, particularly because donors are reluctant to pay for them. But it’s not too much to ask every nonprofit to measure its impact in a meaningful way. When that becomes standard practice in the social sector, we’ll have real reason to cheer.

One thought on “Polite applause for Guidestar Platinum”

  1. clemsgems says:

    Marc,

    I found you when I was looking for some information on my book. Very good stuff. A little about me. I sold a software company in 1999 and have spent the last 15 years in and out of nonprofits and other software companies. I started a Catholic college in Georgia, USA and a residential Catholic high school in Ghana, Africa. I found a lot of scary practices from well intentioned but clueless Nonprofit employees / volunteers. With that, I decided to write a book, aptly titled, “How to Run a Nonprofit”.

    I would like to send you a copy. Please respond to start a dialogue.

    Sincerely,

    Tom Clemets
    http://www.howtorunanonprofit.com

