
March 27, 2007




I am much more interested in metrics of the bad being done by all entities, whether businesses, nonprofits, or governments. How can we measure the good done without knowing the bad? Without an objective and measurable good/bad universal scale, we are not being scientific.


The purpose of metrics isn't to establish the dollar value of a project. As you say, that's determined the same way as the dollar value of cheddar in Manhattan.

Metrics serve two vital purposes: (1) helping us to decide *between* different charitable options, as we try to do as much good as possible (not just "some good") with our donations; (2) helping the charities themselves to learn what works and what doesn't, and improve their own ability to help people.

There is nothing easy about metrics, but they're the only way I know of to get these benefits, benefits we simply can't leave on the table.

Albert Ruesga

Phil: Why measure the bad when we can all simply agree that mistakes were made?

Holden: I think some metrics accomplish what you say for some projects. When a youth development program starts costing $25,000 per young person per year, might it not be better to sock that money away into college accounts for the participants? I don't know. So many axes of value to compare -- quality of instruction, intensity of intervention, paucity of other opportunities: how do we assign weights to these? We can never compare two youth development programs the way we compare two cans of soda: they differ in too many respects.

As for internal evaluations, those we use to improve the programs we ourselves design and run, I suppose we need to ask whether the cost in dollars and opportunity is worth the improvements we imagine we'll eke out of the process.

In any event, aren't we always informally evaluating our own work, constantly trying to find new ways to improve it? And is the explicit articulation of "metrics" really part of this process?


Yes, the explicit articulation of metrics is essential. Informal evaluation and formal evaluation both have advantages and disadvantages - and that means if you're using only one, you're missing out.

Of course there is such a thing as unnecessary and overly costly evaluation. But in this particular sector at this particular time, which way do we need to be pushing? I'd argue that we're way too far toward having NO information about what charities accomplish. Never mind formulas - we literally have no reasonable way, whether analytical, intuitive, or emotional, to decide between different donations.

From my experience in the for-profit sector, I have been constantly, and I mean constantly, knocked off my chair by the extent to which measured reality diverges from what I expected based on my own internal logic and intuitions. It isn't because I have bad intuitions, it's because reality is a tough cookie. The history of scientific progress is the same story.


The only metrics I've seen in this discussion are micro level. Without an understanding of the whole situation, measurement is almost useless. The blind men are arguing about metrics to count hair on the elephant without having a single idea about the elephant itself.

The most arbitrary thing here is dollar value. It is different for everyone, and philanthropy is no different. Which is more important: how the donor values the result, or how the beneficiary values what was given?

Phil, not only must we measure the bad, the negative outcomes of any activity, but we must remember what is missing. Who isn't being served, who is being left outside of every metric, not counted in the total.

Sean Stannard-Stockton

I think Gerry's comment regarding whose value we are counting is spot on. I tried to make this point in a recent post.

I think we should be measuring total value generated. My point in the post was that with a charitable donation, only the donor is putting a number on the value. Using the logic of the market system, the donor is only valuing the benefit that accrues to them personally. But the true value of the work done by the nonprofit is the benefit that accrues to society as a whole.


Letting the donor name the value is just being honest. Value to society *is* valued by the donor. To the extent it isn't, the donor won't pay, no matter what a formula says.

Saying "Fixing this problem would be worth $X to society" is, in the end, just not useful to anyone. If that's wrong, please explain who is supposed to make use of it. By contrast, saying "Spending $X would result in Y lives saved" is meaningful, and helps donors make the decisions that are theirs whether you like it or not.

Sally Wilde

Looking at this conversation from the outside, it appears to me that you agree on so many things: that explicit metrics are valuable, although you might not all agree on the range of their application; that only the donor (here interpreted broadly as “funder”), not the evaluator, can decide if the outcomes of a given intervention are worth $N to the donor, who, after all, is likely the one who raised the evaluation question in the first place; and, finally, that any responsible donor would consider the value to society as a whole of a given intervention, not just the value to him or her.

The way I interpret it, the discussion introduced a series of correctives. Sean wanted to stress the fact that it’s not enough for an organization to do good—although one might ask, not enough for whom? It is enough for me that a particular youth development organization provide a good experience for the young people it serves, and that its staff continuously try to improve the program, with or without the assistance of formal evaluations.

Holden reminds us that there are various uses of formal evaluations, each with its own kind of logic. One kind of evaluation helps us choose between two charities or “different charitable options.” I took it as one of the points of Albert’s post to illustrate how fiendishly difficult this can sometimes be, and how our desire to impose metrics threatens to take us deep into the realm of pseudoscientific hooey.

Gerry rightly reminds us of the multiplicity of points of view from which value can be measured, what some critics have called the embarrassment of axiologies.

I forgot where I was going with this …

Mark Petersen

I guess I see metrics as not the end game, but the starting point. Yes, they are necessary, and the rigor involved in quantifying output/impact is important. And we require these from our grantees.

But once we have the stats - what then? I hope they open up the way for a conversation between donor and NPO on whether the 'market' considered it a worthwhile purchase, which determines then whether to re-invest. There are all sorts of nuances and intangible benefits to society and to individuals that result from a financial intervention. The numbers themselves don't tell the whole story.


Do we all agree on everything? Let me repeat what I find most important:

Formal evaluation driven by good, concrete metrics is essential. Nobody has said at any point that metrics are the end goal or the only thing we need. And we all know they're fiendishly difficult to conceive and measure. But they are ESSENTIAL.

If we all agree on that, perhaps we can go to the next step and talk about what sorts of metrics should be used? GiveWell's take is right here. I'm interested in your thoughts.
