A feral howl on the conventional method of assessing household water affordability, part 1
Recently a colleague asked me how I first got interested in water and sewer affordability. I was tempted to claim that it was pure concern for the welfare of my fellow human being. That’s certainly the main reason. But I think there’s another factor at work: my near-pathological intolerance of bad measurement in public policy.
This is the first in a series of posts in which I advance a critique of (okay, rant about) the long-standing, widely applied standard method of measuring water and sewer utility affordability in the United States. The lengthier, formal version of this discussion is in an article published earlier this year in Journal AWWA.
Next week I’ll be in Las Vegas at the AWWA Annual Conference & Exposition, where I’ll present better metrics, show some initial results of a new national study of affordability, and join Kathryn Sorensen in telling the story of how better measurement is helping to shape policy in Phoenix. But in this post I’m just going to rant.
The most widely applied method of measuring water/sewer utility affordability in the United States is to calculate the average residential bill for a given utility as a percentage of the community’s Median Household Income (%MHI). Usually this percentage is calculated for an entire utility, but sometimes it is calculated for a neighborhood or a census tract. This percentage is then compared with an affordability standard (usually 2.0 or 2.5 percent). A simple binary declaration follows from this comparison: if a utility’s average bill as %MHI is less than the standard, then it is deemed “affordable;” if it is greater, then it is “unaffordable.” Sometimes these %MHI standards are applied separately to water and sewer rates; other times they are applied to combined water-plus-sewer costs (so the standard becomes 4.0 or 4.5 %MHI).
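The conventional test described above amounts to a one-line calculation. Here is a minimal sketch of it; the bill amount, income figure, and threshold below are hypothetical illustrations, not data from this post.

```python
def pct_mhi(avg_annual_bill: float, median_household_income: float) -> float:
    """Average residential bill expressed as a percentage of Median Household Income."""
    return 100.0 * avg_annual_bill / median_household_income


def is_affordable(avg_annual_bill: float, mhi: float, threshold_pct: float = 2.0) -> bool:
    """The binary declaration: 'affordable' if and only if %MHI falls below the standard."""
    return pct_mhi(avg_annual_bill, mhi) < threshold_pct


# Hypothetical utility: $600/year average water bill, community MHI of $50,000
print(pct_mhi(600, 50_000))        # 1.2 (%MHI)
print(is_affordable(600, 50_000))  # True, since 1.2% < the 2.0% standard
```

Note that the verdict is all-or-nothing: a utility at 1.99 %MHI is “affordable” and one at 2.01 %MHI is “unaffordable,” which is part of what makes the method so easy to apply and so crude.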
This approach has become the default way to measure affordability in both scholarly research and policy analysis, with no rationale other than that it is convenient (a fourth grader could do it on the back of an envelope) and conventional (everybody else does it that way).
The reasonable origins of a silly idea
The %MHI method originated with the EPA, where the metric was first developed as a gauge of a community’s financial capability for purposes of negotiating regulatory compliance by its utilities. The idea of %MHI as a measure of financial capability traces back at least to the EPA’s 1984 Financial Capability Guidebook. The idea of identifying specific %MHI thresholds for financial capability apparently comes from EPA’s 1995 guidelines on Water Quality Standards (though these documents offer no empirical or theoretical rationale for the 1.0, 2.0, or 2.5 %MHI standards).
As a measure of utility-level financial capacity (can a utility handle the costs of regulatory compliance?), average bill as %MHI isn’t crazy—it gives a vaguely reasonable sense of the utility’s demands on a community’s financial resources.
Here’s where it fails…
Unfortunately, at some point over the past 20 years, researchers, policy analysts, managers, and rate consultants started using %MHI as a measure of household-level affordability (can a poor family pay its water/sewer bill?). Despite its widespread use, %MHI is seriously, fundamentally, irredeemably flawed as a metric of household affordability.
The main trouble with using average bill ÷ MHI as a measure of affordability is that it does not measure affordability—at least not at the household level, in the way that most people think about affordability. Although EPA has drawn a great deal of criticism for its affordability metrics, this misapplication of %MHI to household-level analysis really isn’t EPA’s fault; the federal government never intended its utility-level metric to be used in customer-level analysis. Alas, lots of otherwise smart and well-meaning people have done just that.
Over the next few posts I’ll get into the many reasons why average bill ÷ MHI is a terrible, horrible, no good, very bad metric.