We are bombarded with marketing messages every day, at every turn. These messages are carefully crafted, meticulously tested, and precisely delivered in order to maximize their influence on our perception and behavior.
Most of the time consumers don’t stand a chance against marketers. There is great asymmetry in the information available to these two classes. Marketers spend much time and money trying to learn how consumers think, whereas fewer and fewer consumers seem to be capable of thinking at all.
But every once in a while I find a company that makes a marketing mistake so flagrant and egregious it actually gives me hope that consumers might see right through it. Here is an example:
A recent article on a company blog touts a survey (done by the same company) that claims 80% of doctors surveyed believe that “virtual assistants” will significantly change the way that doctors do their work within five years.
The problem, of course, is that in a healthcare context no one knows what a virtual assistant really is or does. And the world’s best-known virtual assistant is used daily by only 33% of its customers, has sites devoted to its endless gaffes, and has been said to possess the intelligence of a hammer.
So how could 80% of physicians feel so ebulliently optimistic about something that doesn’t yet exist and whose nearest ancestor is barely useful?
There is an Uncertainty Principle at work in marketing that is not well appreciated by most consumers (and apparently some marketers): You can educate your customer about a new product, and you can measure their perception of your new product, but you have to be careful about trying to do both at the same time.
Why? It’s called priming, and it’s one way that marketers manipulate survey results, whether intentionally or through ignorance.
If your customers know nothing about your product before you survey them, then you naturally want to educate them a bit before you ask them what they think. You’ll tell them all about your vision and the great things that your product will eventually do. But in doing so you are priming them. Whatever they tell you next will be strongly influenced by the proximate experience of learning about your vision. If you create a positive learning experience, they will tend to respond positively even if your vision has only a tenuous connection to reality.
The correct way to measure perception is to do it right out of the gate, at the beginning of the survey instrument, before you’ve perturbed the respondent in any way. Otherwise you’re not measuring reality, you’re measuring a survey experience that is likely more ideal (better) than your product.
So if a survey claims that 80% of customers are excited about a product that doesn’t exist, the survey probably has a priming problem.
Let’s assume for a moment the contrary position, that the survey methodology is sound. Now if a company had a crystal ball that accurately predicted future customer preferences, why would they shout its output from the housetops? Wouldn’t they just build that irresistible product and corner the market before competitors could respond?
Of course they would. So this survey obviously couldn’t connect the dots between perception and intent to purchase. Which means this whole exercise is about nothing more than creating the perception of a positive perception about a product that no one really knows about because it doesn’t exist.
Ironically, Google served up the following ad (Google ads on a company blog? Really?) at the bottom of the post in question:
And if Google can deliver such a perfectly relevant ad when it detects unadulterated marketing BS on a company blog, maybe there is some hope for the BS detectors in consumers’ brains after all.
This is the third installment in a series on the gun control issue. In our first episode we examined data that demonstrate a correlation between fewer guns and an increased variance in the homicide rate. In the second we explored game theory as a hypothesis to explain this phenomenon. This time around we’ll look into another hypothesis, one that can explain why past gun control laws have failed to keep guns out of the hands of criminals and what this means for future legislative efforts.
To begin, we have to go back to school. Way back. You’re a four-year old living in the Bay Area in 1972. You are friends with a girl whose dad is a professor at Stanford. His name is Walter Mischel, and you are about to help him understand something that will inform our conversation on this issue.
One day your parents drop you off at Dr. Mischel’s lab for play time. After some opening activities, you find yourself alone with a researcher, sitting at a kid-sized table in an otherwise empty room, staring down a giant marshmallow. You tell the researcher that you like marshmallows very much. She smiles and says that the marshmallow is yours, and you can eat it if you want to.
But wait, there’s more: She says she has to leave the room for about 15 minutes, and if you haven’t eaten your marshmallow when she returns, she will give you another one, too! The researcher exits the room and you are left alone with your marshmallow, weighing your options.
You’re a precocious little preschooler, future Stanford material, so you calculate your rate of return on this trade:
- You have one marshmallow
- If you invest that marshmallow for 15 minutes in this grad student’s scheme, you will earn a second marshmallow
- Your return on investment, therefore, is 1 marshmallow/1 marshmallow = 100%
- Your rate of return is 100%/15 minutes, or a whopping (2^4 - 1) x 100% = 1500% per hour! At that rate you’d score more marshmallows than there are atoms in the observable universe in just under three days, assuming you could keep the grad student on the hook that long.
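A quick sanity check on that compounding, taking the usual rough estimate of 10^80 atoms in the observable universe:

```python
import math

# One marshmallow becomes two every 15 minutes: a 100% return per period,
# four periods per hour.
periods_per_hour = 4
hourly_return_pct = (2 ** periods_per_hour - 1) * 100
assert hourly_return_pct == 1500

# Doubling every 15 minutes, how long until the stash passes the ~1e80
# atoms in the observable universe?
doublings_needed = math.log2(1e80)               # ~265.8 doublings
days_needed = doublings_needed / periods_per_hour / 24
print(f"{days_needed:.2f} days")                 # ~2.77 days
```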
Fifteen minutes later the researcher returns and sees you rubbing your little hands together with glee, marshmallow still on the table. “How about double-or-nothing?” you eagerly ask. You’ve decided that you don’t need Stanford after all. You’re on your way to starting your own hedge fund!
Your less-strategic friends didn’t fare so well in the same study. A few of them ate the marshmallow right away. Others made an effort to delay eating the marshmallow but just couldn’t pull it off: Fewer than 1/3 of the study participants were able to wait the full 15 minutes and earn the second marshmallow.
You can read more about the marshmallow experiment here.
Let’s jump ahead a few decades and play a grown-up version of this game, called Zero-Coupon Bond. It works like this:
I need to raise some money, and you happen to have some money. I promise that I can pay you $100 one year from today, but I can’t pay anything until then. How much money will you lend me today in exchange for $100 one year out?
Let’s leave creditworthiness out of the question; assume I have plenty of assets for collateral. The real issue for you is the opportunity cost of your money, i.e. the return you could make on the money you lend me if you were to invest it somewhere else.
Let’s say you offer to lend me $90 today in exchange for $100 a year from now. I calculate that the cost of that capital is ($100 – $90) / $90 = $10 / $90 = 11.1% per year.
You admit that the rate is a bit loan-sharky. You blame your experience at Stanford 41 years ago for your unrealistic expected rates of return. We agree to split the difference: you lend me $95 today, and I pay you $100 a year from now. That’s an interest rate of $5 / $95, or 5.26% per year. Not great, but at least I didn’t have to pay an entire universe of marshmallows.
We call this a zero-coupon bond because it is a bond (a promise to pay) that features no coupon payments, that is, no regular interest payments (like the ones on your mortgage) before the bond matures. The principal ($95) is paid back with interest ($5) in a single payment at maturity. And we like zero-coupon bonds because they are really simple instruments that let us experiment on the relationship between present value and future value. They’re the financial equivalents of lab mice.
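The pricing arithmetic from the haggling above can be sketched in a few lines (the function names are mine, the numbers are from the example):

```python
def zero_coupon_yield(price: float, face: float) -> float:
    """Annual yield of a one-year zero-coupon bond bought at `price`."""
    return (face - price) / price

def zero_coupon_price(face: float, rate: float) -> float:
    """Present value of `face` due in one year, discounted at `rate`."""
    return face / (1 + rate)

# The first, loan-sharky offer: $90 today for $100 in a year.
print(f"{zero_coupon_yield(90, 100):.2%}")     # 11.11%

# The deal we settled on: $95 today for $100 in a year.
rate = zero_coupon_yield(95, 100)
print(f"{rate:.2%}")                           # 5.26%

# Pricing runs the other way, too: discounting $100 at that rate gives $95.
print(round(zero_coupon_price(100, rate), 2))  # 95.0
```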
So here’s a thought experiment: What if you had priced the bond at only $94? Maybe I wouldn’t have sold you my promise to pay. What if I had asked $96? Maybe you wouldn’t have bought it. It turns out that $95 was a very special price for us: You and I were both at least indifferent to $95 today or $100 a year from now, and so the deal was done.
But it’s not the $5 that matters; it’s the interest rate those dollars represent. This rate, 5.26% per year in our example, helps us establish a relationship between present value and future value. In finance we refer to this as a discount rate, the rate at which we discount future value to bring it back to the present so we can compare what we spend today with what we will earn in the future, or vice-versa. The higher the discount rate, the lower the present value of future cash flows.
And this discount rate has a special place in decision science. In a business the discount rate for future cash flows is the firm’s cost of capital. A business that raises capital at a cost of 12% will not undertake projects that yield future returns of less than 12% per year if its managers are rational. The business is indifferent to projects that yield exactly the discount rate and it will generally consider investing in projects whose yield exceeds the discount rate. Managers sometimes refer to the discount rate as the “hurdle rate”, because it is the rate of return that an investment needs to “clear” in order to be considered.
It turns out that each of us humans is running around with our own individual hurdle rate in our head. We are faced with a constant stream of decisions that trade off future vs. present rewards:
- Do I sleep in, or do I wake up and go to work? Sleeping in pays off now, work pays off later.
- Do I have dessert? That would definitely pay off now, but skipping it generally pays off later.
- Do I smoke the next cigarette? Smoking might pay off now, not smoking pays off later.
- Do I lie, or do I tell the truth? Lying pays off now, telling the truth pays off later.
- Do I contribute to my 401(k) with each paycheck, or do I spend that money on entertainment? You get the idea.
For those decisions that clear your individual hurdle rate, you tend to choose the future reward. For those that don’t clear the hurdle rate, you tend to choose the present reward.
So what is the range of hurdle rates that we encounter in society?
It’s huge. Frighteningly so.
Recall the marshmallow experiment above: A return of the entire universe in 3 days failed to clear the hurdle rate of more than 2/3 of preschoolers. Children in general have incomprehensibly high hurdle rates for future rewards as most parents will attest.
Drug addicts also have high discount rates when compared with the general population. Many acknowledge that they take enormous risks in order to get their next fix, then do it anyway, over and over again.
Persons who are mentally ill exhibit impaired decision-making similar to that of substance abusers.
And these three classes of people are disproportionately represented in the US prison population:
- 56% of inmates in US prisons have been described as mentally ill.
- 85% of inmates are addicts, have previously been addicts, or were under the influence of drugs or alcohol when they committed the crimes for which they were sentenced.
- And “virtually 100%” of incarcerated juveniles charged with capital offenses are “multiply disabled” by the trifecta of neurological impairment, psychiatric illness, and cognitive deficits.
It seems safe to assume that the discount rate among those who end up incarcerated is quite high, meaning that criminals are likely to choose present rewards over future rewards, every time.
Here’s an illustrative example:
Going back to our robbery vignette from episode 2, let’s consider that you have $100 in your pocket and I have nothing in mine. I can choose to rob you and get $100 right now. I know that I have a pretty good chance of getting your money even if you put up a fight or try to flee.
I also know that if I commit a robbery, I run the risk of getting caught and going to jail for, say, a year. For the sake of easy math, I’m going to value my freedom at $100,000 per year, or the opportunity cost of all the fun I could have during that time as a free man.
So assuming I pull off the robbery and get caught, my payout would look like this:
I get $100 today and lose $100,000 over the next year. Pretty lousy deal at face value, isn’t it? Things don’t even look very good in present-value terms for a rational person whose discount rate is, say, 10%:
The opportunity cost of the first year of future incarceration is discounted by 10% (divided by 1 + 10% = 1.1) to arrive at a present value of -$90,909. The “net present value” or NPV is the sum of what I get immediately ($100) and the discounted future value of what that decision costs me over time (-$90,909). So the NPV of a decision to rob you is $100 + (-$90,909) = -$90,809. If I think like most adults, there’s no way I’m going to risk the robbery. Getting caught would be just too expensive.
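That arithmetic is easy to check directly; this sketch uses the figures from the vignette:

```python
def npv(immediate: float, future_cash_flows: list[float], rate: float) -> float:
    """Sum an immediate payoff and cash flows in years 1, 2, ...,
    each discounted at `rate` per year."""
    return immediate + sum(cf / (1 + rate) ** t
                           for t, cf in enumerate(future_cash_flows, start=1))

# $100 stolen today, $100,000 of freedom lost next year, 10% discount rate:
value = npv(100, [-100_000], 0.10)
print(round(value, 2))   # -90809.09
```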
But let’s use this model to ask a rather interesting question: How high would my discount rate need to be to justify the robbery? In other words, how much would I have to discount the future punishment in order to make robbing you today seem like a good idea?
It turns out that if my discount rate were 99,900% per year, I’d be indifferent to robbing you given the payouts above. That rate seems astronomically high, doesn’t it? If a person with that kind of discount rate were your lender in the Zero Coupon Bond game above, they’d insist on lending you only $0.10 today in exchange for a $100 payment one year from now. If a business used that discount rate to allocate capital, it would invest only if it could double its money in about a month.
But a rate of 99,900% per year equates to about 2% per day, or 750 times less than the discount rate of most preschoolers as measured in the Stanford marshmallow experiment. The disturbing conclusion is that this simple crime could very well pay off in NPV terms for someone with the discount rate of a child, or a drug addict, or one who is mentally ill. In other words, for precisely the classes of people who tend to end up in prison in the United States.
We who consider ourselves to be rational adults struggle to understand this fact, but that doesn’t stop it from scaring us. And we respond like we’re dealing with rational adults: If 1 year in prison for robbery doesn’t deter the crime, let’s make it 2 years.
But how does a criminal view another year in prison? We’ll keep the other terms (amount of money stolen, opportunity cost of a year in jail, discount rate) of the deal the same:
Wait a minute, what happened here? The NPV is now $100 + (-$100) + (≈$0) ≈ $0, and the perpetrator is still indifferent to the crime!
Perhaps we “rational” adults forgot that discount rates, like other interest rates, compound exponentially over time. We discount the Year 1 punishment back to present value by dividing by 1 + the discount rate, or 1 + 99,900% = 1,000. We calculate the PV of the second year of prison by dividing its opportunity cost by (1 + the discount rate) squared, or 1,000,000. And for the third year we divide by (1 + the discount rate) cubed, or 1,000,000,000, and so on.
So if the perp’s discount rate wasn’t big enough to discourage the crime given a 1-year sentence, adding more years isn’t going to make a bit of difference. You could threaten him with an automatic life sentence, and the crime would still pay off in NPV terms.
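A short sketch confirms both claims: the break-even rate for a one-year sentence is 99,900%, and once the rate is that high, piling on extra years barely moves the NPV.

```python
# Break-even: $100 today must equal $100,000 discounted one year,
# so 1 + r = 100,000 / 100 = 1,000 and r = 999, i.e. 99,900% per year.
loot, cost_per_year = 100, 100_000
r = cost_per_year / loot - 1
assert r == 999.0

def npv_of_crime(years: int, rate: float = 999.0) -> float:
    """NPV of stealing `loot` now and losing `cost_per_year` of freedom
    for each of `years` years, discounted at `rate` per year."""
    return loot - sum(cost_per_year / (1 + rate) ** t
                      for t in range(1, years + 1))

# Each extra year of sentence is discounted by another factor of 1,000,
# so every year beyond the first changes the NPV by about a dime, total:
for years in (1, 2, 10, 50):
    print(years, round(npv_of_crime(years), 4))
```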
The threat of additional incarceration does nothing to dissuade those who are already predisposed to view crime as a good investment. This idea is counter-intuitive for those of us who have reasonable discount rates and are therefore motivated to stay out of jail; we naturally view obeying the law as a better investment. Incarceration thus becomes a means of removing the high-discount-rate persons from society. And there sure seem to be a lot of them lately:
What’s causing this? And what can be done to change the status quo?
First, we need to recognize that even those with the highest discount rates for future consequences cannot discount immediate ones. Consider the internet café armed robbery video we discussed in the last episode: The bad guys viewed robbing the café and its patrons as a good investment in spite of the high likelihood that they would be incarcerated. However, they quickly (and dramatically) changed their minds when faced with the threat of immediate death. Their discount rate didn’t change, but the consequences were moved out of the future and into the present where the discount rate is irrelevant. Criminals respond better to immediate consequences than to future ones.
Second, we need to recognize and begin to address the futility of the growing mountain of new laws that are aimed exclusively at people who have no intention of obeying existing ones. These legislative actions seem to be popular and therefore help our officials justify their bids for re-election, but do they do any real good? And how do we calculate the cost of already-law-abiding people having to navigate an endless web of rules?
Finally, we can go back to the Stanford marshmallow experiment and internalize a few of its findings from follow-up studies of the same participants:
- Having a low discount rate in preschool (i.e. tending to choose future rather than present rewards) was correlated with being described by parents and peers as more competent ten years later.
- Choosing future rewards in preschool was correlated with achieving a higher SAT score in high school.
- A 2011 study of the same participants indicates that the ability to choose future rewards over present ones remains with a person for life. Brain imaging showed key differences between the marshmallow-eaters and non-eaters in areas linked to decision-making and addictions.
Most importantly, a similar study in 2012 found an underlying factor that significantly affected the subjects’ ability to choose the future reward: The participants were divided into two groups, one that was primed with a broken promise before being presented with the first marshmallow (the unreliable tester group), and one that was primed with a fulfilled promise (the reliable tester group). Subjects in the reliable tester group waited four times longer before eating the marshmallow than subjects in the unreliable tester group.
Whoa there, pardner.
Are you saying that the way a preschooler makes decisions is correlated with the way he will make decisions as an adult?
And are you saying that these behaviors in preschool ultimately predict the way his adult brain will work at the cellular level?
And you’re saying that, while he’s still in preschool and this behavior is not yet hard-wired into his neurons, this tendency to choose future rewards — that will make him more competent, more successful, and even keep him out of jail — can be increased by something as simple as keeping your promises to him?
That’s exactly right.
But none of these facts will help anyone get re-elected. And so this time around we’ll get a new set of gun control laws, which will be obeyed by the law-abiding citizens and completely disregarded by the criminals. All perfectly rational, of course.
In my last post we undertook a cursory analysis of data related to firearm ownership and homicide rates across various jurisdictions. We concluded that strict gun-control laws and reduced firearm ownership are correlated with increased variance in the total homicide rate:
In other words, the worst-case scenarios are worse under strict gun-control laws and lower rates of firearm ownership.
Mere correlation, however, does not necessarily imply causation. We need a hypothesis that explains how gun control can enable more murders to be committed, then we need to test that hypothesis empirically before we reject it or fail to reject it.
But first, you and I need to rob a bank. For educational purposes.
You suggest that we knock off a local branch of an unnamed bailed-out bank holding company that recently froze the accounts of a legal firearms manufacturer. I insist that no weapons or threats of violence are needed to rob a bank these days; we’re peaceful, enlightened criminals who have no need for such things. You go inside, walk up to the teller, demand money, and leave. I drive the getaway car. Electric, of course. We stash the loot in an abandoned building in Detroit and lay low in our hideout until the whole thing blows over.
We’re up late one night watching Current TV. Suddenly, sirens wail! We look at each other in a panic, aware that the jig may be up. We reaffirm our loyalty to one another, we pinky-swear that we will never confess to our crime or implicate one another, no matter what the cost. You start to cry. I steel my resolve and pick up a pointy stick, ready for battle.
The cops burst through the door. I raise my weapon and ask to see a warrant. They taser me. You chortle a bit when you see me flailing about on the floor. I can’t feel my legs. You get handcuffed, I get hog-tied. This is the last time we see each other. They put us into separate cars and haul us off to the Graybar Hotel. We’re booked and held separately overnight.
Late the next morning a jail officer leads me to a drab, windowless room and motions for me to sit in a rather flimsy and uncomfortable chair. The jailer exits and another man who looks like Barney Frank waddles through the door and plops his corpulence into the chair across from me. He leans forward and squints for a moment through thick glasses.
“We know everything,” he intones somewhat nasally, his fetid breath inducing a wave of nausea. I gag and try to cover it by clearing my throat.
“Of course!” he retorts sharply. “Your partner sang like a canary.” He manages a wry smile.
I put on my very best poker face. I’ve been practicing regularly in anticipation of a moment like this. After an awkward silence, Barney’s doppelganger turns a bit redder in the face, inhales sharply and bellows:
“You have two options! Confess everything, and you’ll get five years. Or keep quiet, and you’ll get twenty years. Either way, we own you!”
He produces a cassette tape player and hits the record button. I am momentarily startled by the reappearance of such an ancient technological artifact, then I close my eyes to concentrate and mentally draw the following matrix:
As a result of some serendipitous timing and your convincing work at the bank, we made off with a total of $1 million that we have agreed to split 50-50. So if you and I cooperate by honoring our pinky-sworn agreement to not rat each other out, each of us will receive $500,000 once the prosecutor realizes he doesn’t have enough evidence to convict us.
But if I honor our agreement and you defect, I go to jail for 20 years (for the purposes of the exercise, I value my freedom at $100,000 per year) and you probably get to cut a deal with the prosecutor to walk away with only probation after you testify against me. You’ll say the whole thing was my idea, that I hid the money and you have no idea where it is. And once I go to jail, you’ll pick up the $1 million payout at that abandoned building in Detroit. Given the way you laughed at me last night while I was being shocked into oblivion, how do I know that this wasn’t your plan all along? You sly devil!
It’s clear that you’re better off if you defect, and I now realize that if you do, I’m much worse off if I don’t defect as well. If we both confess, we both go to jail for five years and the bank gets its money back, but that’s preferable to me doing twenty while you live it up with all that cheddar.
You’re probably being put through this same ordeal in the room next door. We’re locked in a distributed Battle of the Wits. I bet your interrogator looks more like Princess Buttercup than mine does.
This little anecdote represents a specific instance of a game that economists call Prisoner’s Dilemma. Generally, you and I would both be better off if we cooperated, but we each have an incentive to cheat. And if there is an incentive for you to cheat, the rational thing for me to do is defect, and vice versa. We call this rationally-optimal state a Nash equilibrium (named after the mathematician, not the point guard).
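The story pins down a payoff matrix, even though it isn't reproduced above. Here is a sketch with cells built from the story's numbers (loot split $500k each if both stay quiet, freedom valued at $100k/year, 5 years if both confess, 20 years for a silent partner whose accomplice defects and walks with the whole $1 million), plus a brute-force search for the Nash equilibrium:

```python
# Payoffs as (mine, yours) for each pair of strategies.
payoffs = {
    ("quiet",   "quiet"):   ( 500_000,    500_000),
    ("quiet",   "confess"): (-2_000_000,  1_000_000),
    ("confess", "quiet"):   ( 1_000_000, -2_000_000),
    ("confess", "confess"): (-500_000,   -500_000),
}

def is_nash(mine: str, yours: str) -> bool:
    """Neither player can gain by unilaterally switching strategies."""
    strategies = ("quiet", "confess")
    best_for_me = all(payoffs[(mine, yours)][0] >= payoffs[(m, yours)][0]
                      for m in strategies)
    best_for_you = all(payoffs[(mine, yours)][1] >= payoffs[(mine, y)][1]
                       for y in strategies)
    return best_for_me and best_for_you

equilibria = [cell for cell in payoffs if is_nash(*cell)]
print(equilibria)   # [('confess', 'confess')]
```

Even though ("quiet", "quiet") pays both of us more, each of us improves his own payoff by confessing no matter what the other does, so mutual defection is the only stable outcome.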
It turns out that Prisoner’s Dilemma can help us model all sorts of interactions in which the players have a choice to either collaborate or defect: Advertising, the use of performance-enhancing drugs in sports, OPEC oil production quotas, etc.
One of the most famous applications of game theory was the military doctrine of strategy known as mutually assured destruction that defined how the Cold War was carried out. The United States and the Soviet Union each had enough nuclear-armed missiles to destroy the other several times over. We cooperated by not firing them at each other. If we had defected by launching our missiles at the Soviets, they also would have defected by launching their missiles at us. We’d both be annihilated, which is the worst possible state, so neither of us launched the first strike.
But if we had disarmed unilaterally, mutual destruction would no longer be assured. There would be no reciprocal penalty for the Soviets should they defect and launch the first strike, and vice-versa. The ironic reality is that two nuclear-armed superpower rivals are safer than one nuclear-armed superpower who could strike with impunity with no threat of reciprocal strikes.
Let’s use game theory to test gun control scenarios.
You are walking down the street with $100 in your pocket. I am sitting on the corner with nothing in my pocket. As you approach, I have a strategic decision to make:
- I can cooperate by letting you pass, or
- I can defect by attempting to rob you of your $100.
Being a rational thug, I quickly create the following payout matrix in my mind:
If I choose to cooperate, you keep the $100 and I get nothing, which seems like a pretty good deal for you but a lousy deal for me. If I defect and decide to rob you and you cooperate, I get the $100, and you keep nothing. If I attempt the robbery, you could decide to defect by running away or putting up a fight. I size you up and figure that I have a 90% chance of winning either a fight or a foot race, which makes my expected payoff 90% x $100 = $90. Therefore you have a 10% chance of keeping the $100 plus a 90% chance of me breaking your nose and sending you to the ER, which makes your expected payoff 10% x $100 + 90% x -$1000 = -$890 if you decide to defect.
My expected payoff if I defect is $90. My best-case payoff if I cooperate is nothing. So, rationally, I step up and demand that you hand over your money. When I do, you instantly calculate the same payoff matrix and decide that while $0 is worse than $100, -$890 is a lot worse than $0. So you cooperate. The Nash equilibrium here is that I defect and you cooperate, and as a thug this equilibrium pleases me immensely.
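The expected values in that matrix follow from the probabilities in the text (the ~$1,000 ER trip is the figure used above):

```python
# The thug wins a fight or foot race 90% of the time.
p_thug_wins = 0.90

# Thug's choices: let the victim pass, or attempt the robbery.
thug_cooperate = 0.0
thug_defect = p_thug_wins * 100                                  # $90 expected haul

# Victim's choices, facing the robbery: hand over the cash, or run/fight.
victim_cooperate = 0.0
victim_defect = (1 - p_thug_wins) * 100 + p_thug_wins * -1000    # -$890 expected

# The equilibrium: the thug defects and the victim cooperates.
assert thug_defect > thug_cooperate
assert victim_cooperate > victim_defect
```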
But I have a problem: There are people who I can’t intimidate into cooperating. I can segment my “market” and target only the easy prey: Women, smaller men, people walking alone at night. But these targets get wise to my strategy and start changing their behavior to reduce their vulnerability. They walk in groups to improve their odds of escape or winning a fight against me. They cross the street when they see me up ahead. They don’t go out after dark when it’s harder to see me and there are fewer Good Samaritans to rescue them.
I need to change my strategy to adapt. I could partner with a few of my friends to improve the odds of success against stronger victims or groups of people, but we’d be a bit conspicuous sitting around waiting for someone to rob. And I’d have to split the loot among the group, which I don’t like. And being thugs like me, they can’t be trusted.
Here’s a thought: I could use a weapon. I could present the weapon as I make my demands. People are conditioned to fear weapons, almost every time they see one in the media it is associated with something bad happening to a good person. And if I’m armed, I’ll be able to take on stronger individuals and even small groups of people, increasing the size of my target market, my expected profit per transaction, and my win rate! Here is the updated payout matrix:
Let’s assume you have a 1% chance of being brave/stupid/fast enough to run from someone who is able to threaten you with a deadly weapon. And while I have no intention of actually using the weapon on you, it’s important that you understand that you don’t know what my intentions are. All you know is that if I use the weapon on you, your very negative payout represents a reasonable risk of immediate death.
Clearly, the armed-robbery business is much better than the unarmed-robbery business. For the thug, at least. And things don’t have to stop at mere robbery. A few early successes can induce grandiose delusions or narcissism as the thug realizes the power he wields over his victims. The thrill of power may lead him to act out other, more perverse fantasies to increase his “payout” at the expense of his victim, who is rationally willing to settle for any outcome marginally better than death.
But let’s say that you, the victim-in-waiting, have studied game theory a bit as well. You understand that the Nash equilibrium above is bad news for you. It sets you up to be either a victim or a hermit, and neither of those is any way to pursue life, liberty, and property, not to mention happiness.
What would happen if you exercised your natural right to defend yourself? You happen to live in one of the 49 states that issue concealed carry permits (the one holdout, Illinois, was recently ordered by a Federal court to come up with a concealed-carry permit program within 6 months), so you obtain one and carry a firearm legally. Here’s how the payout matrix changes:
This changes everything. The thug knows that if he attempts to rob you, armed or otherwise, you are entitled to defend yourself. If he picks the wrong target, he’ll get two rounds to his center-of-mass and one between the eyes. That’s a very bad ending for any rational being, even a thug. So in a scenario where there is a chance that the victim may be armed, the Nash equilibrium is for both parties to cooperate (the upper-left cell).
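The shift in equilibrium can be sketched the same way. The probability that a given victim is armed and the dollar value the thug assigns to being shot are illustrative guesses of mine, not figures from the post; the point is only that any meaningful chance of a catastrophically negative outcome swamps a $90 expected haul.

```python
def thug_expected_payoff(p_armed: float, loot: float = 100.0,
                         payoff_if_shot: float = -100_000.0) -> float:
    """Thug's expected payoff from attempting a robbery when the victim
    may be carrying. p_armed and payoff_if_shot are illustrative, not
    figures from the post."""
    p_win_vs_unarmed = 0.90      # the 90% success rate from the text
    return ((1 - p_armed) * p_win_vs_unarmed * loot
            + p_armed * payoff_if_shot)

print(round(thug_expected_payoff(0.00), 2))   # 90.0: robbery pays
print(round(thug_expected_payoff(0.05), 2))   # -4914.5: cooperation pays
```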
That’s a lot of words to illustrate what happens in this short video. Two armed tough guys attempt to knock off an internet café in Florida. They begin rounding up the patrons to separate them from their possessions (and who knows what else), when an armed 70-year-old rains (lead) on their parade. Spoiler alert: People get shot (though you wouldn’t know it from the video), nobody dies, and you may laugh out loud when you see the replay of Thug 1 running over Thug 2 as both cowards try to squeeze through the exit door at precisely the same moment:
If you think all this game-theory nonsense is the result of me twisting microeconomics to fit a uniquely modern problem with crime and violence, you’re wrong. Take a look at this excerpt from Cesare Beccaria’s Essay on Crimes and Punishments, as quoted in Thomas Jefferson’s “Legal Commonplace Book”:
Laws that forbid the carrying of arms…disarm only those who are neither inclined nor determined to commit crimes. Such laws make things worse for the assaulted and better for the assailants; they serve rather to encourage than prevent homicides, for an unarmed man may be attacked with greater confidence than an armed one.
So, can correlation imply causation? Yes indeed, provided there is a hypothesis that can be tested and validated empirically. I think the game-theory hypothesis is quite compelling. Based on the evidence I’ve studied, I even think it’s correct. But I can’t in good conscience recommend that we carry out a randomized double-blind controlled study when the lives of innocent people are at stake. We need to look at the variance that already exists in crime rates and firearm ownership; there is a mountain of evidence that we can use if we are willing to do the science rather than acquiesce to political demagoguery.
Articles like this one began to appear soon after last month’s tragic massacre in Connecticut. The logic embodied in them is straightforward: Guns are designed to kill people, and more guns mean more dead people, therefore fewer guns mean fewer dead people.
The reasoning is so patently obvious, in fact, that anyone who argues against common-sense gun control like a ban on assault weapons and high-capacity magazines is either an idiot or a murderer-in-waiting, and we can’t afford to let either of these join the conversation. We just don’t have time.
I mean, come on, people! How many more innocent children have to die before we figure this out?
Before we jump any deeper into an emotionally charged conversation, let’s take our own look at the data. And let’s commit to intellectual honesty. No fudging the facts or conveniently excluding data that do not support our thesis.
Let’s zoom out and consider the issue more broadly than others usually do: Murder of an innocent person in any form is evil, and guns certainly aren’t the only way that bad guys kill. They also use knives, swords, baseball bats, crowbars, rocks, pointy sticks, ropes, lead pipes, candlesticks, automobiles, bombs, poison, fire, water, and even their bare hands, among other things too numerous and too gruesome to list. So why analyze only firearm-related murders?
To the murdered innocent and those who love him, the precise manner of his demise is of secondary concern, subordinate only to the principal fact that he is dead, that he has been unjustly deprived of life, and that this evil deed cannot be undone.
Well, this is troubling. It looks like there are 20 or so countries that have significantly lower firearm ownership rates and homicide rates that are at least as high as that of the United States, as if a higher firearm ownership rate were correlated with a lower overall homicide rate.
How could there be so many murders in those countries with so few guns? Puzzling. The model is probably skewed by all those Spanish-sounding names in the upper left quadrant.
Since we’ve committed to being intellectually honest, we can’t simply exclude these “outliers” and compare the United States against, say, Northern Europe – which is buried in that cluster of blue points in the lower-left quadrant of the graph with practically no guns and practically no murders – even though doing so would clearly support our thesis that fewer guns would make us safer. No, we’ll take the high road and try to find another dataset that controls for factors like living in a narco-state vs. a Scandinavian socialist utopia.
Here is Graph 2, in which we compare Homicide Rate vs. Firearm Ownership Rate by State. The firearm ownership data come from a 2001 survey of 201,881 households by the Behavioral Risk Factor Surveillance System (whose name sounds very safe and trustworthy in spite of containing the word surveillance) and are paired with the 2001 intentional homicide rates by state (compiled from various sources):
This whole exercise is starting to get frustrating. This graph exhibits the same troublesome negative correlation between the firearm ownership rate and the homicide rate as the previous graph, but the best-fit regression line is logarithmic, which suggests diminishing returns: the first few armed households in a jurisdiction may indirectly protect more households than their own. The R-square indicates that 35% of the variance in the homicide rate is explained by the firearm ownership rate, which is significant even though the sign of the coefficient contradicts our thesis. Regardless, the remaining 65% of the variance in the homicide rate across jurisdictions is explained by factors other than firearm ownership, which doesn’t bode well for our idea that we just need fewer guns to make our society safer.
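For readers who want to see the mechanics behind a logarithmic best fit and an R-square figure like the one described above, here is a sketch using synthetic stand-in data; the actual BRFSS and homicide figures are not reproduced here, so the fitted numbers are illustrative only.

```python
# Fit y = a + b*ln(x) by ordinary least squares and compute R-squared.
# The data below are synthetic stand-ins, NOT the BRFSS/homicide
# figures from the post.
import math

ownership = [5, 10, 20, 30, 40, 50]          # firearm ownership %, hypothetical
homicide  = [9.0, 7.5, 6.0, 5.2, 4.8, 4.5]   # homicides per 100k, hypothetical

def log_fit(xs, ys):
    lx = [math.log(x) for x in xs]           # transform x, then fit a line
    n = len(xs)
    mx, my = sum(lx) / n, sum(ys) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ys))
         / sum((u - mx) ** 2 for u in lx))
    a = my - b * mx
    # R^2 = 1 - SS_residual / SS_total
    ss_res = sum((v - (a + b * u)) ** 2 for u, v in zip(lx, ys))
    ss_tot = sum((v - my) ** 2 for v in ys)
    return a, b, 1 - ss_res / ss_tot

a, b, r2 = log_fit(ownership, homicide)
print(f"y = {a:.2f} + {b:.2f} ln(x), R^2 = {r2:.3f}")
```

A negative coefficient b corresponds to the inverse relationship described above, and the logarithmic form captures the diminishing marginal effect of each additional armed household.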
But most disturbing of all, three of the four jurisdictions with the most-restrictive gun laws and the lowest firearm ownership rates (Washington DC, US Virgin Islands, Puerto Rico) have murder rates that are 2-3x higher than those of the states with the next three highest murder rates (Louisiana, Alabama, Mississippi), all of which have much-less-restrictive gun laws and rates of gun ownership that are 8-10x higher than the more-restrictive jurisdictions. And because the two cohorts share a number of otherwise blameworthy factors (high poverty rates, high unemployment, poor public education systems, etc.), those factors cannot explain the difference. Not good at all.
Could the United States really be that similar to the rest of the world? These household firearm ownership statistics are probably more comparable to the rest-of-world firearm ownership numbers, since counting households (rather than guns) controls for the fact that the US, with its high per-capita GDP, likely has many more households with multiple firearms than does any other country.
Enter Graph 3, in which we compare Homicide Rate vs. Firearm Ownership Rate for the US (by state/territory/district) and the Rest of World:
Controlling for US households that own multiple firearms, then, it looks like the distribution of household firearm ownership by state is well within that of the rest of the world. And the same inverse relationship between the firearm ownership rate and the homicide rate seems to hold.
There are exceptions, notably Hawaii and the cluster of blue points that surround it, all of whose names have been invoked by the media with great reverence as examples of the success of strict gun-control laws, but we cannot ignore the reality that more-restrictive gun laws and a reduced firearm-ownership rate can be associated with a significant increase in the homicide rate, both in the United States and the rest of the world.
To say that this observation is counter-intuitive would be a gross understatement.
But really, how bad could it be if we enacted sweeping gun-control legislation at the national level? Let’s assume what looks like the worst-case scenario, with the entire United States adopting the position of Washington, DC:
There would be 3.8 households out of 100 that own firearms (down from a national average of 31.7 per 100; Senator Feinstein would be pleased) and there would be 40.3 homicides per 100,000 people (up from a national average of 4.7 per 100,000 in 2011).
That’s an 8.5x increase in the national homicide rate.
That’s 125,840 homicides per year.
That’s more than 12 Sandy Hook massacres every day.
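The extrapolation above is simple rate arithmetic. Here is a quick sketch using the rates from the text; the 2011 US population figure (~312.7 million) is my own assumption, so the totals may differ slightly from the figures above depending on rounding.

```python
# Back-of-envelope check of the DC worst-case extrapolation.
# The rates come from the text; the US population figure is assumed.
dc_rate = 40.3                # homicides per 100,000 (Washington, DC)
us_rate = 4.7                 # national average per 100,000 (2011)
us_population = 312_700_000   # assumed 2011 US population

increase = dc_rate / us_rate                          # ~8.5x
homicides_per_year = dc_rate / 100_000 * us_population
sandy_hooks_per_day = homicides_per_year / 365 / 26   # 26 victims at Sandy Hook

print(f"{increase:.1f}x increase, {homicides_per_year:,.0f} homicides/yr, "
      f"{sandy_hooks_per_day:.1f} Sandy Hooks/day")
```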
Perhaps the data do not necessarily support the easy, “intuitive” arguments for gun control after all. It seems that disarming a population is correlated with an increase in the variance of the expected homicide rate. There are good outcomes, make no mistake about that. But there are also some very bad outcomes, on the order of 10x worse than the status quo.
But we can still ban those scary-looking assault rifles, right? And those big high-capacity magazines like the terrorists use. There’s clearly no point in having those if you aren’t planning on killing lots of people. They’re no good for hunting, and you don’t need that kind of firepower to defend yourself against a mugger or a rapist or a burglar in the middle of the night.
Wrong. Dead wrong.
It turns out that Washington, DC is far from the worst-case scenario for homicide. And it turns out that gun violence, or violence of any type, is far from the most efficient way to massacre innocent people.
The most efficient weapon in the history of the world is forced starvation, and the Ukrainian language has a word for it: Holodomor (literally hunger-extermination).
The Holodomor was Stalin’s answer to the problem posed by the kulaks, relatively prosperous farmers who owned their land and had resisted collectivization following the Bolshevik revolution. When Soviet economic policies that favored trading grain for farm machinery caused food to become scarce, the government responded in August 1932 by declaring that all food was property of the state and that mere possession of food was evidence of a crime. Crops were seized from the kulaks and redistributed to party loyalists under a quota system, while the kulaks and more-affluent peasants were left to starve. By the most conservative estimates the Holodomor killed 3 million people in Ukraine during the following 12 months, and 6-7 million across the entire Soviet Union during that same period.
One of those victims was my wife’s great-great-grandfather, who died in August 1933 under forced labor on what had previously been his family farm. He had sent two of his sons to the United States prior to the Bolshevik revolution, and today our family is fortunate to have a collection of his letters to those sons, letters that document in excruciating detail the tightening grip of the tyranny that ultimately took his life.
Can you fathom the loss of 3 million lives in one year? How about 3 million out of a total population of 29 million?
That’s a homicide rate of 10,345 per 100,000.
In the United States today, that would be 32.5 million people in a year.
That would be more than 3,300 Sandy Hook massacres every day. For a year.
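The same rate arithmetic, applied to the Holodomor figures cited above; the modern US population of roughly 314 million is my own assumption.

```python
# Rate arithmetic for the Holodomor figures cited above. The Ukraine
# figures come from the text; the modern US population is assumed.
deaths = 3_000_000
ukraine_pop = 29_000_000
rate_per_100k = deaths / ukraine_pop * 100_000        # ~10,345 per 100,000

us_pop = 314_000_000                                  # assumed US population today
us_equivalent = rate_per_100k / 100_000 * us_pop      # ~32.5 million deaths
sandy_hooks_per_day = us_equivalent / 365 / 26        # 26 victims at Sandy Hook

print(f"{rate_per_100k:,.0f} per 100k; {us_equivalent / 1e6:.1f}M deaths; "
      f"{sandy_hooks_per_day:,.0f} Sandy Hooks/day")
```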
Here is Graph 4, in which we compare the Holodomor to the overall homicide rates in the rest of the world. The scale of the vertical axis is now logarithmic to accommodate the magnitude of this atrocity. The Holodomor was 100x worse than the homicide rate of modern-day Honduras, currently the world’s most violent country. You will note from the graph that there was no legal firearm ownership in Ukraine in 1932; the Soviets had outlawed firearms in 1929 after years of kulak resistance to collectivization:
The actual food confiscation of the Holodomor was carried out not by bureaucrats but by urban thugs loyal to and armed by the regime. They raided farms, built watchtowers over the fields to ensure the peasants weren’t “stealing” grain (the penalty was summary execution), abused the peasants going to and from work, and amused themselves by raping the women who lived alone in the countryside. These mobs had been half-starved themselves and brainwashed by propaganda into thinking they were doing their civic duty, and there was nothing that the disarmed kulaks could do to stop them.
Think this type of massacre couldn’t happen today? You’re wrong. One is brewing right now: Venezuela has long had a strict permit process for firearm ownership. Last year Hugo Chávez enacted sweeping new gun-control measures, banning all retail sales of firearms and ammunition. And just today, his heir-apparent vowed to crack down on “hoarding” and sent troops to take control of food distribution networks. Who will protect the disarmed producers when the loyalist paramilitary mobs come for their food?
Remember the Los Angeles riots of 1992? 53 dead, 2000+ injured, billions of dollars of property damage. 6 days before the National Guard and the Marines were able to restore order to the city. 45% of property damage was inflicted on Korean-owned businesses even though Koreatown was far from the epicenter of the rioting. The business owners had no one to protect them, their families, and their livelihoods — no one except themselves and their neighbors. One gentleman reported firing more than 500 rounds from his rifle — into the ground and into the air — to defend against the mobs of looters after the police had abandoned his block and before the National Guard arrived days later.
This is precisely why law-abiding citizens must never be banned from owning powerful weapons that are designed to defend against large numbers of armed, violent people when tyranny rises.
Sandy Hook was a tragedy, and we should learn from it and do better in the future. We need to look beyond legal firearm ownership as the cause of murder, since the data show that jurisdictions with stricter gun laws and fewer firearms per capita can have significantly higher murder rates compared to states with more firearms.
And as we try to learn from Sandy Hook, let us never forget the 32.5 million reasons why we have the Second Amendment.
I need to share some news with you. I met with the Board last night and we’ve decided that we need to make a change.
Mark, we’re terminating your employment effective immediately.
I see that you’re disappointed to hear this, and justifiably so. It’s clear that our business relationship has been very good for you. But the fact is that you just aren’t very good at the job that I hired you to do.
You see, Mark, I hired you back in 2007 to do something special: Your job was to help me stay connected with my friends.
To your credit, you really impressed me at first. I was reconnecting with people who I hadn’t seen in many years. It was great to hear about their lives and see pictures of their kids. I even started connecting with people who I regularly saw in real life. It was fun when they commented on a picture of my family or “Liked” a link I posted. I found your service especially useful when I relocated to distant cities a couple of times for new jobs. I felt like I stayed connected, even just a little bit, with good friends and colleagues who I left behind.
Of course, there were a few things that I found disturbing about your work from the outset. I could moralize about the countless hours you’ve enabled people to waste tending to their virtual crops and mafias while most of the real world is mired in a recession. I could bemoan that you’ve reduced interpersonal communication to nothing but a laconic Like. And I could point out that so many of your users seem to cluster into rather homogeneous echo chambers of political discourse as they “Unsubscribe” from any opposing opinion.
No, I noticed these quirks early on and have managed to work around them. None of them are the reason why you are losing your job.
Do you know what it was, Mark? Do you know what pushed me over the edge? You started interjecting yourself into my friendships.
You – who could not serve up a single relevant ad in the five years we’ve worked together in spite of knowing nearly every relevant detail about me – thought you would apply that same ineptitude to my news feed. You couldn’t seem to figure out that I already have a wife and an MBA, that I have changed jobs twice since Microsoft, and that if I haven’t wanted a pair of Bonobos for the past four years, I’m not likely to want them today. How are you qualified to choose what appears in my news feed?
In former times I simply chose to deal with this unfettered hubris of yours. I spent hours trying to train your creation to show me the stories I wanted to see. I changed feed settings. I made more liberal use of the Like button. I even gave feedback on the ads you showed me. (Do you realize that process takes three clicks? Three!) After much effort, it slowly became somewhat tolerable to use your service again.
And then, for reasons unknown to me, you tried to apply your distinctive incompetence to the way I organize my own content. Yes, you do know what I’m talking about, Mark.
You foisted Timeline upon us.
My initial reaction to that bastard child of user interaction design can only be described as emetic. Being wary of change in light of your mercurial track record, I dealt with Timeline by refusing to opt in, silently snickering whenever one of my friends cursed it in a post.
That is, until I logged in the other day and you informed me that I will be forced to use Timeline starting August 19th.
I was incensed. Do you know what I did, Mark? I sat down for an hour and painstakingly deleted all of the content on my Timeline. All of it. Every last photo, video, wall post, and check-in. And do you know what happened when I was finished? More of my content appeared! Things that had not been visible when I began the Great Purge of 2012 popped up on my Timeline. “Oh yes,” I thought. “I do remember writing that.” Delete. “What? More?!” Repeat.
That’s right, Mark. You can’t deny it. You went so far as to appoint yourself as editor of my own content! Nobody gets between me and my content. Nobody! You crossed a line that never should have been crossed, and that is why you’re losing your job.
It would be inappropriate for me to speculate about what caused you to become so disconnected from the job I hired you to do. And in the end, it doesn’t really matter to me. I have a job that needs to be done, and you’re not doing it. I’m going to hire others who will.
This isn’t to say that I harbor any ill feelings toward you personally. I just think you have some things you need to learn before you’re in a position to work for me again. I should add that the Chairwoman was never on board with me hiring you in the first place. I personally stuck my neck out for you, and your poor performance has cost me some political capital. If you hope to have your job back at some point in the future, you’ll have to work hard to get her to champion your cause.
In the meantime, I suggest you contact your broker and arrange to sell some more of your equity stake. I imagine there are a few people out there who have not used Timeline and are still willing to buy the shares.
Looks like HIStalk picked up my recent post on Epic over the weekend. I’ve been an avid reader of Mr. H since 2004, so to say I’m excited about the mention would be an understatement. To all the folks clicking through, please feel free to weigh in with a comment or even subscribe!
Mr. H says:
I’m not convinced Epic is changing strategy at all just because a couple of unnamed consultants speculated as such (Epic has always sold ambulatory-only deals), but if they are, I’d infer the opposite. Epic has not hit the predicted wall on scalability, customers keep giving its products industry-leading KLAS scores, nobody is de-installing or grumbling about value, and prospects keep signing up in droves despite high project costs.
First, I want to be clear that I have no evidence that Epic is refocusing on the ambulatory EMR market. I’m not saying that Epic has hit a wall, that its customers are not happy, or that they are grumbling about lack of value. My post was a speculative response to an HIStalk post about an alleged shift in Epic’s sales strategy. My intent was to say “If in fact there has been a shift, here are some potential reasons why a market leader would have made that decision.”
Mr. H continues:
Each time Epic sells an ambulatory-only deal, it (a) deprives a competitor of a new sale, and (b) plants a flag that has a decent percentage chance of yielding an easy inpatient sale down the road.
We’re in agreement on this point: Whether Epic considers its primary competitors to be high-end (Cerner, Allscripts, Siemens) or low-end (eCW, PracticeFusion, Amazing Charts), it makes sense to claim as many ambulatory customers as is economically feasible in the near term.
If anything, I suspect Epic is gaining confidence given the near absence of significant competition and is willing to ramp up sales, which by definition means they will be selling to smaller hospitals and practices.
The problem with this thinking is that in chasing smaller customers a vendor acknowledges saturation or diminishing returns in the upmarket segments. There is a fixed cost to landing a deal with any customer, plus the opportunity cost of the potential customers you aren’t chasing. It just doesn’t make sense to take 90% less revenue against those costs if you have better options on the table.
The company’s favorite statistics involve not the number of hospital customers it has, but rather the percentage of physicians and patients using its systems. I think they want that number to keep rising for reasons beyond financial, and any change in strategy can be attributed to unchallenged dominance rather than newfound desperation.
The interesting point here is that these percentage metrics tend to favor a vendor whose strategy is to pursue the largest customers, those who have the most patients/physicians/beds per hospital. A conscious decision to move downmarket means that each win moves these metrics less at the margin. And I don’t think I cast a downmarket move as “newfound desperation”. It could be a rational business decision based on feedback from the market under the four scenarios I originally described.
Finally, I’d caution against leaning too heavily on data from KLAS or other common sources to make inferences about current vendor strategies. The problem with most third-party sources of customer information is that they are only lagging indicators from a strategic perspective. A vendor makes a strategic decision, they operationalize the strategy, they close some deals, they do the implementation work, and 6-12 months later those new customers fill out some surveys. Cycle time: 3-5 years. These reports may contain clues about vendor strategies from 2008, but not from today.
Mr. H posted the following yesterday:
From Barry: “Re: Epic. A market research report suggests that Epic is backing off its push for inpatient installations and going with an ambulatory-only sales approach to plant the seed for future inpatient sales.” That was reported by two consultants quoted in the report, with an additional consultant saying that Epic is getting some pushback from customers who question whether they’re getting their money’s worth.
Why might the market leader choose to pull back? A few possible reasons come to mind:
- They may have hit a capacity constraint. The critical path to revenue in this business runs straight through a long implementation process. Even if you win 100% of the deals, you have to find consultants to staff the implementation teams, and these are in short supply. Thank HITECH for moving the constraint from the health system capital purchase committee to the headhunter staffing the implementation team.
- They may have crossed the midpoint of their target market segment. The Bass diffusion model generally predicts that peak unit sales occur when 50% of the target market has adopted a product. From that point on unit sales decline year over year, or the total number of customers increases at a decreasing rate. If this is the case, it would make sense for Epic to ease up on the throttle a bit and focus their efforts on other segments.
- They may be choosing price over quantity. Epic built its reputation in the inpatient world by being very careful about choosing its early customers and maintaining an air of exclusivity. An exclusive, customer-intimacy-driven business model does well in the left end of the demand curve where the customer’s willingness to pay is sufficient to fund the attention the vendor is willing to give. But a sufficient degree of success saturates the high end of the market and forces the vendor to look lower in the demand curve for new business. Epic may be taking a look at the margins of that business, as well as other brand-related factors, and simply saying “No thanks”.
- They may be recognizing the disruptive threat posed by low-end competitors in the ambulatory market. Epic is winning because of their inpatient/ambulatory integration. They started in the low end of the market, the ambulatory side, and grew the capabilities of their offering to the point where they could begin to compete in the inpatient world, and then they began having great success displacing the incumbents. Established inpatient vendors have struggled to cram their solutions down into the lower-end ambulatory market. This dynamic typifies what Clay Christensen has termed “low-end disruption”, that low-end players are more likely to evolve and displace high-end players than vice versa. If Epic is smart, they’ll recognize that a free or very-low-cost ambulatory EMR like eClinicalWorks or PracticeFusion could evolve and do to Epic what Epic did to Cerner, Eclipsys, and McKesson.
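The Bass-diffusion claim in the second bullet above, that unit sales peak when roughly half the target market has adopted, can be checked numerically. Here is a short sketch with illustrative p and q coefficients, not estimated from any real data; analytically the peak occurs at cumulative adoption F = (q − p)/2q, which approaches 50% when imitation (q) dominates innovation (p).

```python
# Discrete-time Bass diffusion model: new adoptions per period are
#   f = (p + q * F) * (1 - F)
# where F is cumulative adoption, p the coefficient of innovation, and
# q the coefficient of imitation. The p and q values are illustrative.
def bass_peak(p=0.01, q=0.4, steps=200):
    F = 0.0
    history = []
    for _ in range(steps):
        f = (p + q * F) * (1 - F)   # new adopters this period
        history.append((f, F))      # record sales rate and adoption level
        F += f
    # Return the cumulative adoption level at which per-period sales peaked.
    return max(history)[1]

peak = bass_peak()
print(f"sales peak at F = {peak:.2f}")
```

With p small relative to q, the simulated peak lands a little below 50%, consistent with the rule of thumb that unit sales begin declining once about half the market has adopted.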
Time will tell whether Epic really refocuses its efforts on the ambulatory market. Even if it does, chances are that in order to fend off a disruptive threat by one of today’s low-end players, Epic will need to tweak its business model to enable SaaS delivery of its products at a very low cost with little to no configuration required.
I have noticed that much of the last few days’ search traffic to this site clusters around themes related to the recent HSG/GE Healthcare JV, the accompanying HSG layoff, wondering what will happen to the customers, etc. In an attempt to stay true to my roots as a marketer, here’s an attempt to give my readers what they seem to want:
I’ve had twenty or so conversations with former HSG colleagues over the past week. Some of these folks were affected by the layoff, some were not. Most of the salespeople seem to have been unaffected. Between last week’s layoff and the attrition leading up to it, about half of the marketing and product management team is gone. I’ve heard reports that most of the physicians were affected and that up to half of the engineering team was let go. By my rough calculation, that totals 25-35% of the overall headcount at HSG.
It’s hard to describe how it feels to hear this news. These are all extremely smart, passionate, hard-working people. Microsoft simply doesn’t hire people who don’t clear that bar. My message to each of those affected has been something along the lines of: This turn of events is not a consequence of your abilities or your work. Now is a good time to be looking for a job in this industry. Contact me if you need help networking.
On the topic of NewCo, it will be interesting to see how those invited to participate undertake their decision process over the next few weeks. I wonder how the merger of GE and HSG cultures will happen, or whether it will happen at all. HSG grew rapidly by acquisition and a pervasive culture never really had a chance to coalesce. The Azyxxi people were still Azyxxi people, and the same was true for the GCS and Sentillion people as well as the team that built HealthVault. Perhaps the emergence of a strong strategy to tie these assets together was a prerequisite for a strong culture to evolve as the teams came together to achieve a common goal.
An interesting litmus test: Will there be equity participation for the rank-and-file NewCo employees? Microsoft traditionally prides itself on having all employees participate in stock ownership; GE typically reserves equity grants for executives. Perhaps the answer to this question will foretell which way the rest of the NewCo culture will lean.
It’s been two and a half years since McKesson executed a major shuffle of the executive suite in its Technology Solutions business, and we’ve now seen what I’d consider to be the first major announcement since that change: The company is halting development on its Horizon inpatient clinical and revenue cycle products and doubling down on Paragon to the tune of $1B in new R&D investment over the next two years.
In the discussion that followed this announcement, several themes have emerged:
- This change is part of a broader initiative.
- ICD-10 and Meaningful Use precipitated a decision to rationalize the portfolio.
- The migration from Horizon’s Oracle platform to Paragon’s Microsoft platform will reduce customers’ TCO.
All of these reasons for the change make sense at first glance. But here’s what’s not being said:
- Epic is mopping the floor with these guys.
- Epic is winning on inpatient/ambulatory integration, not clinical/financial integration.
- Epic doesn’t seem to be bothered by its TCO problem.
My conjecture is that the MPT executive team is making a tactical retreat from the bloodbath it’s been enduring in Epic’s target market and is attempting to rally around Paragon, which is thriving in the sub-300-bed segment. These customers appear to be a bit more price sensitive, a bit less interested in following the herd, and a bit less dismissive of loose inpatient/ambulatory integration. If true, this approach by McKesson seems like it would have a higher likelihood of success than would standing toe-to-toe and trading punches with the champ.
A final data point that may add weight to this argument: At last week’s ASHP show in New Orleans, McKesson commanded a gargantuan booth on the main aisle in the exhibit hall. Immediately under the company name and logo on the front of the booth was a mock-up of a hospital outpatient pharmacy. The prominent signage advertising the solution? “COMMUNITY HOSPITALS”
I was happy to see today’s HIStalk interview with Michael Weintraub, CEO of Humedica. This is a company that seems to have an innovative product coupled with an innovative business model that is positioned for greatness as health reform kicks into high gear.
Humedica’s offering has a value proposition similar to that of analytics offerings from Microsoft and dbMotion: Extract data from transactional systems and make it “liquid”, apply some semantics to it, and allow users to navigate it and turn it into insight.
What appears to be different, from my outsider’s perspective:
- The solution appears to be delivered via a SaaS model that requires no hardware or software purchases by the customer.
- The business model appears to be free to providers, with the revenue coming from the provision of de-identified data to researchers (pharma, etc).
While these differentiators may appear to be straightforward, they will have an important effect on the reach of Humedica’s business.
The value proposition for noodling on one’s data has near-universal appeal, but there are two classes of buyers: Those who “get it” and are willing to pay handsomely for tools that allow them to supercharge their existing efforts at process improvement, disease management, research, etc.; and those who don’t “get it” but just think it’s cool to be able to do these types of things. The latter outnumber the former by about 10:1 in the provider space, meaning that the majority of the provider market for this type of offering is extremely price sensitive.
Humedica has the potential to navigate around this problem by making analytics effectively free to the provider in both time and money. And if they can build all the internal plumbing for each provider on the dime of research sponsors in pharma etc., they should be able to turn around and monetize that asset by delivering additional use cases at extremely low cost and charging handsome fees to providers and payers.
I’ve ranted about business models before; it’s nice to see some smart people who seem to have done the right thing by delivering an innovative product and service with a commensurately innovative business model.