
March 25, 2008


Listed below are links to weblogs that reference March Metrics Madness:

Comments


Nonprofiteer

Outstanding contribution to the entire evaluation debate!--wish I'd written it. Thanks so much for your clarity on the issue, and particularly for the idea of "distributed evaluation."

Antoine Möeller

A university ombudsman said something I remember twenty years later:

We only fully embrace our students as members of the community as they are leaving that community.

He said other things I remember, too, like 'Sex is worth the hearing loss.'

Quite an impact.

Albert

Well, if you're going to lose your hearing anyway, might as well enjoy yourself while you're doing it.

"Impact" language in evaluation reminds me of the "target audience" language in communications. Very martial. Makes me want to duck behind a bale of hay.

Sean Stannard-Stockton

Wow! What an outstanding post, Albert. I commented in my own post today.

So glad to have you posting regularly again. Keep it up!!

Pete Manzo

Excellent piece! Great perspective on metrics and the fetishization of them.

Your concept of "distributed evaluation" is terrific. I recall many years ago a professional evaluator here in LA telling me that the best that any nonprofit could do is exactly what you suggested - learn what research, done by academics and others with that expertise, has shown to be most effective (this kind of reminds me, by the way, of the concept of "procedures," recommended practices based on experience, from the military or NASA.) , incorporate those practices into their programs, and make the argument to their funders that they work is designed in a way that should work. Unfortunately, usually the conversation flows the other way - driven by the funder's requests - and in addition, I imagine it will take a great deal of discipline for funders to refrain from asking grantees to collect data beyond the verifications of effective practices you describe in distributed evaluation. That's why you'll need the counter-revolution you call for, to make it possible.

Thanks for sharing this and all your posts.

mugwumpiana

Liked the essay, but I do wish to take issue with the following. You wrote:

"Note that it would be absurd for us to call the gas company, thank them for their outputs (namely, the gas they deliver to our houses), and then complain that they haven’t demonstrated to us any outcomes or impacts. Why is it that we reserve this nonsense for the people who work in the nonprofit sector?"

To carry your analogy forward, if the gas is the output, then the outcome would be the flame, and the impact would be the heat. We don't complain to the gas company about insufficient outcome and impact because the output is reliably high quality. However, if the gas company started producing bad output---if it started piping unburnable nitrogen rather than methane into our houses---then yes, very much so, we'd complain about insufficient outcome and impact: "Hey, my furnace shut down and it's freezing in here!"


Albert

Your comment raises an interesting question, mugwumpiana. What's the real analogue of an outcome or an impact in the case of the gas company? My own inclination—not shared by you, apparently—is to look at gas, flame, and heat as outputs. After all, the path from the first to the last in the series is the mere lighting of a match. When unreconstructed funders inquire after outcomes, however, they’re typically looking for effects that have a less direct (and causally efficient) connection to the simple output. Suppose your program is helping a young child with his homework five days a week. One of these unreconstructed funders might ask, for example, “What has been the effect of your mentoring on the child’s grades and test scores?” (outcomes), “Did the child go to college and ultimately serve society by going to work at the gas company?” (impacts), and “Did the example of the child’s success encourage Tv@rrr, denizen of a parallel universe, to take up the snackletuner?” (hyper-impact).

Analogously, one might ask the gas company representative to demonstrate to us that our funding of their product produces a net benefit for the United States. Can this representative prove, by basing his answer on the analysis of a highly compensated consultant, that overall the gas industry doesn’t simply increase our dependence on fossil fuels, leading ultimately to tragic outcomes and impacts for all? Even if we don't go this far, it still seems silly to ask the representative to demonstrate the gas company's social outcomes.

That’s my analysis. I can understand how other people’s results might differ.

mugwumpiana

Hey Albert, thanks for the reasoned reply.

I wonder if the reason folks tend to ask nonprofit grantees, but not the gas company, to demonstrate "impact" is that the whole raison d'être of a philanthropic foundation is to produce social benefit---impact---in amounts greater than what would have been produced by the tax revenue the government has failed to collect on the foundation's assets.

The gas company doesn't purport to do anything besides pump methane into your house---produce a simple output. It doesn't promise anything in the way of outcomes and impacts. What you do with the gas---short of blowing up the neighborhood---is your own business.

(On the other hand, much advertising "promises" outcomes and impact beyond the simple output of delivery of the product. When I write out a check to the Chevy dealer, the output I expect is the handing over of a set of keys to a shiny new Corvette. Imagine my disappointment when the implied outcome---enhanced personal sex appeal---and impact---hot babes galore!---fail to materialize.)

It's altogether right and proper to question the outcomes and ultimate impacts (if not the hyperimpacts) of foundation initiatives. The trick is to do so in a way that promotes good grantmaking rather than hinders it, and in that sense your caveats and objections are well taken.

Cheers!

Sean Stannard-Stockton

mugwumpiana, I haven't seen your comments in the past. Your comments are great and I'd love to see you over at my blog Tactical Philanthropy.

This is a great debate you two are having. In the for-profit sector, the impact is always the same: make money. Why do people buy the gas? Who cares. If the company can generate a profit, then the investors are happy. But as mugwumpiana points out, nonprofits only exist to further their mission.

Imagine a nonprofit whose mission was to prevent elderly people from dying due to cold weather. They would probably want to supply heat to the people's living quarters. The impact they seek is keeping people from freezing to death. The gas, flame, and heat are only relevant to the extent they prevent death. If this nonprofit ran around hooking natural gas pipes up to people's houses for free, the only question the funder should care about is: did you prevent deaths?

We don't ask the gas company for their impact because we don't care. If they are turning a profit, they are achieving their goal. But we ask the nonprofit for their impact (saving lives), because it may not be self-evident whether they are achieving their goal.

Teri Behrens

Great discussion! Warning -- long post.

Let me play out the gas company analogy another way. I want to be well-fed and warm -- these are the impacts I am seeking to achieve. The energy in my house for heat and cooking is the outcome that should lead to those impacts. And gas is one way -- but only one of several -- to deliver that outcome. I could have electric energy, which could be supplied by the local electric utility or by the windmill or solar panels on my house.

As a consumer, what I care about most is the impact, but I also care about the cost, efficiency, and carbon footprint of the mechanisms for achieving that impact. I can do some research on those things and figure out what tradeoffs among cost, efficiency, and carbon output I am willing to make. I make these tradeoffs, though, based on knowing fairly clearly the desired impacts -- the temperature I want my house to be, how much I cook, etc. I might revise some of these (keep my house cooler, for example), and I personally don't work this out on a spreadsheet, but the variables are generally known. I could even do something like put in new windows or add insulation to help achieve the warm-house impact.

Take this to a social program now. If the impact we are trying to achieve is children reading at grade level by grade 3 (which research shows us is critical to long-term academic and career success), there are any number of ways we can try to have that impact: we can reconfigure schools, activate parent groups, revise curricula, train teachers differently, etc. What we don't know, however, is how to trade all those off to achieve the desired impact.

Ideally, evaluation of foundation programs should help us learn how to make those tradeoffs. In reality, though, each particular school has its own set of circumstances (it is a complex system that we are trying to change), such that mixing two parts teacher training, one part class-size reduction, and three parts parent involvement is not necessarily the recipe for success in every case. Or, to stick with the original metaphor, the cost, efficiency, and carbon footprint of these strategies differ widely in different settings.

The national movement toward evidence-based practice (most pronounced in health care, but gaining ground in education) is very similar to the "distributed evaluation" model. Methods that are shown to be effective in clinical trials or rigorous educational outcome studies are the ones practitioners are encouraged to adopt. This is the research that is funded by the federal government -- not foundations.

My belief is that what foundation evaluations can and should do -- working collaboratively with practitioners -- is to develop principles that guide the implementation of proven practice. Back to the recipe analogy, it might read "add flour and knead until dough is stiff" rather than "add 2 cups of flour."

This only works, though, if we know what we are trying to achieve. The metric about the final impact -- reading proficiency by third grade, for example -- HAS to be consistently measured. Otherwise, we can keep revising curricula and convening parents, but forget why we were doing it in the first place. This is not to say that goals won't change -- they should change if there is a reason -- but we don't want them to drift.

erasmus

You guys must not read a lot of for-profit mission statements.

Microsoft: To enable people and businesses throughout the world to realize their full potential.

"Mr. Gates," asks the skeptical evaluator, "can you demonstrate to me that because of your efforts people are now meeting their full potential?"

Glaxo Smith Kline: To improve the quality of human life by enabling people to do more, feel better and live longer.

I'm not feeling so good right now.

FedEx will produce superior financial returns for shareowners by providing high value-added supply chain, transportation, business and related information services through focused operating companies.

Superior to what?

Teri Behrens

Which raises the question of who is doing the evaluating. For investors, all of those mission statements are implicitly preceded with... "To make money by..."

Jeane Goforth

Thank you so much for this enlightening post and discussion. Spent Saturday in a meeting with a non-profit consultant and supporters defining our new organization. The most frustrating part for me was discussing metrics. I find it difficult to see how what can be measured will show the real impact, especially short term. But the most significant thing you have made me realize is that my primary resistance to expanding the metrics we track is that my co-founder and I cannot take on any more responsibilities. The 'distributed evaluation' concept turned on a light bulb.

