Posted by Albert
Sean Stannard-Stockton, our blogger colleague at Tactical Philanthropy, poses these very difficult questions for his latest Giving Carnival: How should social impact (or nonprofit effectiveness) be evaluated? How can we best understand the output of nonprofit or for-profit social enterprises? It is not enough to simply say that an organization is “doing good.” How much good are they doing and how effective are they at turning “inputs” (donations and/or investment dollars) into social “outputs”?
I offer this modified version of an earlier post in reply ...
We seek a kind of scientific or moral certainty from a formal evaluation. But it can provide neither. The questions that funders most often bring to an evaluator—Was this program worth our $25,000 investment? Should we continue funding it?—are questions only they can answer. There’s simply no absolute scale against which an evaluator can measure the value of a philanthropic investment.
Here are some of the typical moves in the evaluation game:
WE: Was this youth development program worth our $25,000 investment?

EVALUATOR: Dunno. What are the kinds of outcomes you’d like from a youth development program?

WE: Better grades in school.

EVALUATOR: Hmm. Don’t see any of that. Would you settle for a lower dropout rate? I see some of that.

WE: That sounds good. How much lower?

EVALUATOR: I see a 15 percent lower dropout rate. Is that worth $25,000 to you?

WE: Don’t know. Let me think about it.
The evaluator’s first job is to determine who is asking the evaluation question, and his second job is to discover what that person values. Then and only then can he design an evaluation that attempts to detect those values. An evaluator isn’t measuring the effects of a given program willy-nilly. He can suggest various outcomes that we might look for from a given program, but, for the most part, he’s trying to detect the outcomes we think are worth seeing.
And so we’re at the stage where we’ve detected a 15 percent lower dropout rate for youth participating in our program, as in the example above. Assuming it’s not an effect of how the young people were selected for the program, is the outcome—together with other secondary outcomes—worth what it cost to run the program? We can try to compare this program with other programs we know, but they appear to be very different from one another. Some have a mentoring component, others don’t; some include leadership training, others focus on academics. The annual cost per student is $5,000 in one program but $10,000 in another, and there are significant differences in the quality of program delivery. At a certain point, the annual cost per student starts looking ridiculous—but what is that point? And is it the same for you as for me?
And so it goes. What then is the value to society of a given nonprofit program or organization? The simplest answer is that it’s whatever the market is willing to pay for it.
For those of us in the nonprofit sector, one of the hardest truths about evaluation is that the value to society, in dollar terms, of nonprofit work is as much determined by market forces as is the price of cheddar in Manhattan. And we, the people who ask the evaluation questions—the funders, the donors, the board members—are that market.
Good evaluators know this. They would gladly tell us, but often they intuit that we’d rather not know. Frequently their role is to describe all the potentially valuable outcomes of a given program or organization and then retire from the scene, leaving us to ponder what kind of investment it might be worth.
The counsel most useful to us at that point might come from a priest or a philosopher, or, if we’re on a very tight budget, a Ouija board.