(Originally published 3/21/2007 on zachmortensen.net)
The term moral hazard is generally used within the insurance industry to describe the tendency of the insured to increase their risk tolerance once their risks are hedged by insurance. For example, a consumer with a homeowner’s insurance policy might decide against returning home to double-check that she had turned off the stove or locked the door before she left the house. Or a consumer with a comprehensive auto insurance policy might not worry so much about parking his car in a bad neighborhood at night. In general, reducing people’s exposure to risk seems to increase their tolerance for risk, and the corresponding increase in risky behavior is what constitutes the moral hazard.
The concept extends to other domains. Professor Kent Smetters of The Wharton School taught that US Department of Transportation rules requiring seat belts and airbags in new vehicles have led to increasingly reckless driving as drivers have felt more secure. And while driver and passenger fatalities have diminished due to the direct effects of these safety devices in the years since the regulations took effect, fatalities of motorcyclists, bicyclists, and pedestrians have increased in proportion to drivers’ reckless behavior over the same period. Smetters joked that the most effective automobile safety device may not be a seat belt or an airbag, but rather a large dagger mounted on the steering wheel and aimed directly at the driver’s heart, as it would give the driver and any pedestrian roughly equal odds of dying in a minor collision!
A recent event illustrates how moral hazard applies to healthcare technology: an infant was abducted from a hospital that had invested in RFID tracking to prevent precisely that crime.
It is impossible to assert categorically that healthcare IT reduces the overall risk within a given institution. True, technology may hedge specific risks, but there will be a moral hazard in the way human beings respond to that reduced risk, and it is difficult to predict ex ante whether the moral hazard will outweigh the direct risk reduction being sought in the first place.
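To make that netting concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is hypothetical, chosen only to show how a control’s direct risk reduction and the behavioral response it induces combine into a single expected-loss figure:

```python
# Back-of-the-envelope expected-loss model of moral hazard.
# All figures below are hypothetical illustrations, not data.

def expected_annual_loss(base_prob, loss_if_event,
                         control_effect=0.0, behavior_multiplier=1.0):
    """Expected annual loss from one risk.

    base_prob           -- baseline annual probability of the event
    loss_if_event       -- cost incurred if the event occurs
    control_effect      -- fraction of the risk the technology directly removes
    behavior_multiplier -- factor by which relaxed human vigilance
                           inflates the underlying hazard (the moral hazard)
    """
    return base_prob * behavior_multiplier * (1.0 - control_effect) * loss_if_event

BASE_PROB = 1e-4       # hypothetical: 1-in-10,000 annual chance of the event
LOSS = 5_000_000       # hypothetical: cost of a single incident, in dollars

without_tech = expected_annual_loss(BASE_PROB, LOSS)

# Suppose the technology directly removes half the risk, but staff who
# trust the system grow lax enough to triple the underlying hazard.
with_tech = expected_annual_loss(BASE_PROB, LOSS,
                                 control_effect=0.5,
                                 behavior_multiplier=3.0)

print(f"Expected loss without technology: ${without_tech:,.0f}")
print(f"Expected loss with technology:    ${with_tech:,.0f}")
# 3.0 * (1 - 0.5) = 1.5, so under these made-up numbers the moral
# hazard outweighs the direct risk reduction by 50 percent.
```

The catch, of course, is that the behavior multiplier can only be measured after the fact, which is exactly why the netting is so hard to predict ex ante.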
Such was the case with the infant abduction mentioned above. The hospital’s investment in RFID technology, netted against the moral hazard the technology created, yielded no benefit to the baby. In the end it was good old-fashioned police work, not technology, that tracked the infant down in Clovis, NM and led to her safe recovery. According to news reports, hospital infant abduction is an exceedingly rare occurrence, with fewer than 120 cases recorded in the past 20 years. It’s something that security guru Bruce Schneier would call a “movie-plot threat”: a threat of infinitesimal probability that causes panic nonetheless because it appeals so strongly to our emotions. The threat of hospital infant abduction was apparently compelling enough for the 900+ hospitals that had bought the VeriChip RFID infant security system as of 2005. I wonder how many VeriChip customers have also purchased marmoset insurance.
Unfortunately, too many provider organizations treat healthcare IT as an insurance policy rather than as an investment expected to add value to the organization. Too many healthcare IT vendors enable this mindset, having recognized that it’s easier and far more lucrative to sell on fear, uncertainty, and doubt (FUD) than to quantify and demonstrate a return. And ironically enough, moral hazard creates additional risk whenever providers and vendors try to mitigate risk with technology, resulting in a vicious cycle that consumes vast amounts of capital without providing any real benefit.
Wouldn’t we all be better off treating healthcare IT projects as investments rather than insurance and focusing on the value they can create within the marketplace?