
Try homeschooling this week, risk-free.



My wife and I homeschool our three children. We started when they were very young, so we never really faced the dilemma experienced by many parents in the public-school system who feel like they should homeschool but wonder whether they can make it work for their family.

But we have worked with many families over the years who have struggled with this dilemma. Our experience is that parents who try homeschooling find that it not only works for their family, it works much better than they had imagined it would.

If so many parents are aware that homeschooling their children is an option, if they see it working well for other families like theirs, and if they even feel that they should give it a shot, why do so many parents hesitate to try homeschooling?

I think it’s all about risk.

I’ve spent much of my professional life helping companies figure out why more customers aren’t buying their products. Many times the product isn’t the problem. I’ve learned that even a superior product with a “no-brainer” value proposition, one that effectively prints money for a customer, will struggle if the company’s business model forces the customer to take on too much risk.

Homeschooling is no different. Research has consistently shown that homeschooled children score significantly higher on a battery of standardized tests than those who attend public schools, regardless of factors such as gender, household income, education level of the parents, and homeschool regulation at the state level. But the risks associated with a decision to homeschool can outweigh even these well-documented rewards and lead parents to perpetuate the public-school status quo.

Parents perceive all sorts of risks in the decision to homeschool their kids:

  • What will my friends, neighbors, extended family, or even my spouse think of this decision?
  • How will I manage teaching so many subjects to kids of different ages?
  • Where will I find a good curriculum?
  • What about math? I don’t think I’m good at math. How will I teach them math?
  • What if my kids turn out to be socially awkward like that one other family?
  • What if I can’t stand to be around my kids for that many hours every day?
  • What if I fail? What if I ruin my kids because I’m not good enough as a parent or teacher?

Parents who choose to homeschool will have to manage all of these risks in the long run. The key is to find a way to get started without having to confront them all at once, before any of the rewards are evident.

And thanks to a particularly nasty winter storm, this week affords parents across much of the country a unique opportunity to try homeschooling risk-free. Think about it:

Schools are closed.

Parents are home from work.

Power may be out. If not, pretend like it is for a couple of hours. At least turn off the TV and the video games.

Sit down with your kids and explain that you’d like to do a bit of work together before turning them loose to play outside in the snow. Have them review their most recent math or writing assignment with you. Or watch a Khan Academy video together. Or memorize a poem, write a letter, or read a short story. Go to the library. Or read a relevant Wikipedia article on a topic they are studying in science or history.

What you do together doesn’t really matter at this point. Do something together that leaves you both feeling good about the experience and a little bit smarter than when you started.

You don’t even have to tell your kids, or your spouse, or your friends, or anyone else what you’re doing. At least not yet. A few of these baby steps will work wonders for your self-confidence and your ability to answer the questions that will inevitably follow.

Take advantage of this risk-free opportunity to convince yourself that you are more than capable of teaching your own kids. You have nothing to lose!

It’s time to cry for Argentina (again)


My experience with Argentina goes back nearly two decades, to when I spent two years as a Mormon missionary in Patagonia. I learned that the Argentine people are wonderful, the food is exquisite, the scenery is breathtaking, and the economy is never more than five years away from being a complete disaster.

While Argentina is now plumbing the depths of the Index of Economic Freedom, few know that it was one of the world’s 10 largest economies at the turn of the 20th century. In fact, for much of its early history Argentina’s economy was at parity with that of the United States.

But these two countries pursued very different economic policies that were rooted in their respective colonial histories: The United States made property ownership easy as it expanded westward, while Argentina created a new class of aristocrats who owned the vast majority of its frontier land. Both countries benefited from exporting agricultural commodities to Europe in the 19th century, but Argentina’s risk-averse landowner-oligarchs resisted industrialization. Global food prices collapsed in the mid-1920s as production in Europe bounced back after the end of World War I, and this shock hit Argentina’s undiversified agricultural economy especially hard. The global Great Depression a few years later further destabilized the economy, which in turn led to a series of political shocks: A dictatorship, a coup, a military junta, and a disastrous war with Great Britain.

In summary, Argentina represents how the United States might look in an alternate universe in which los yanquis had made similar political and economic decisions.

Argentina’s currency was quite stable during my time living there. In those days the value of the peso was pegged to that of the dollar. This trick brought Argentina’s chronic hyperinflation under control, from an annual rate of 3,000% in 1989 down to 3.4% in 1994, and restored public confidence in the local currency.

But problems soon emerged. The strong dollar-pegged peso reduced the competitiveness of Argentine exports, so provincial and city governments ran fiscal deficits in order to “stimulate” their economies and fight stubbornly high unemployment. In those days I had many conversations along these lines with people I met in various cities:

Yo: “So what kind of work do you do?”

Fulano: “I work for la municipalidad (the city).”

Yo: (looking at my watch, Tuesday at 11:00 AM) “Hmm…”

Fulano: (smiling) “They only call me when they need me, che!”

The cash-strapped cities and provinces needed frequent injections of cash from the federal government, which in turn was unable to finance this spending by simply printing more pesos as it had done in the past. So all of these state and local deficits turned into federal debt, denominated in dollars and sold on the international credit markets.

Things started to get really bad for Argentina in the late 1990s. The economy was struggling under an increasing debt load, and the strong US dollar forced Argentina to maintain a strong peso while the currencies of major trading partners and competitors in Brazil, Europe, and Asia were relatively weak due to regional crises. There was no way for Argentina to remain competitive in global trade without devaluing the peso, which would require breaking its peg to the dollar, and this in turn would precipitate a default on the country’s dollar-denominated foreign debts.

The Argentine people are quite familiar with monetary crises. In 2001, after being mired in a nearly-three-year-old recession and fearing currency devaluation, they began withdrawing their pesos from banks, converting them to dollars at the official 1:1 exchange rate, and stashing the dollars in their mattresses (or outside the country if the mattress was too small). The government responded with a set of capital controls that became known colloquially as the corralito (literally “playpen”), prohibiting cash withdrawals from dollar-denominated bank accounts unless the deposits were converted to pesos, and limiting withdrawals from peso-denominated accounts to 300 pesos per week.

The public outcry in response to the corralito was fierce, culminating in the resignations of Economy Minister Domingo Cavallo and President Fernando de la Rúa. Their successors expanded the corralito in short order by forcibly converting all dollar-denominated deposits (and debts) to pesos and devaluing the peso from the exchange rate of 1 peso per dollar to 1.4 pesos per dollar. Soon afterward they let the peso float, and several weeks later it stabilized at a rate of about 4 pesos per dollar.
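To see what this meant for an ordinary saver, here is a back-of-the-envelope sketch using the exchange rates above (the deposit size is a hypothetical example):

```python
# What forced pesification cost a dollar depositor. The exchange rates are
# the ones described above; the deposit size is a hypothetical example.

deposit_usd = 10_000       # dollar-denominated savings caught in the corralito
conversion_rate = 1.4      # forced conversion: 1.4 pesos per dollar
floated_rate = 4.0         # approximate market rate after the peso floated

pesos = deposit_usd * conversion_rate       # 14,000 pesos
value_usd = pesos / floated_rate            # 3,500 dollars
print(f"${deposit_usd:,} became ${value_usd:,.0f}: "
      f"a {1 - value_usd / deposit_usd:.0%} loss")
```

Losing roughly two-thirds of one’s savings by government decree goes a long way toward explaining the fury in the streets.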

I visited Argentina again a few years after the 2001 devaluation. This time I was in graduate school and we were studying local companies as part of a course in global strategy. The economy had recovered substantially in spite of the government being shut out of global credit markets due to its 2001 default. Most of the companies we visited were bullish on the country’s future.

The highlight of the trip was a dinner that I had with a good friend and former missionary companion. We shared good memories from our time working together and talked about our young families and career plans. He said that one of the lasting legacies of the devaluation was the “brain drain” that occurred as many young, talented professionals left Argentina for other countries. He estimated that a third of the people his age in his neighborhood had emigrated since the crisis.

The following afternoon my classmates and I had a meeting with none other than Domingo Cavallo, the man who had implemented dollar convertibility in 1991 to break Argentina’s chronic inflation and, ironically, had been responsible for the corralito a decade later. I expected that Cavallo would give us a history lesson, perhaps to explain what he did in each case and why. The lecture turned out to be much more useful than I had imagined.

Cavallo spoke for two hours on the perils of inflation, its early warning signs, and how we should manage our personal finances and businesses in an inflationary economy. He began by explaining that inflation is highly asymmetric: It’s easy to create (just fire up the printing presses), but it’s very hard to eradicate. Once people begin to expect inflation, the problem becomes particularly acute: People demand higher wages, which in turn drives up prices, which signals to more people that they should expect future inflation and the cycle repeats ad infinitum.

However, Cavallo said that while this cycle can produce painful inflation on the order of 10% to 20% per year, it cannot produce on its own the type of “hyperinflation” on the order of 100% to 1,000% per year that had plagued Argentina in the past. Hyperinflation is possible only when consumers lose confidence in their local currency and begin to save and trade in foreign currency. Prices are actually quite stable in foreign-currency terms during episodes of hyperinflation. The massive price increases in local-currency terms occur because of devaluation: No one trusts the local currency enough to risk holding it at practically any price.
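Cavallo’s distinction is easy to verify with toy numbers: hold the dollar price of a good fixed, let the currency slide, and all of the “hyperinflation” shows up in local-currency prices:

```python
# Toy illustration of Cavallo's point: hold the dollar price of a good fixed
# and let the local currency collapse. All numbers are invented.

price_usd = 10.0                     # stable price in foreign-currency terms
pesos_per_usd = [1, 2, 5, 10]        # hypothetical slide of the local currency

local_prices = [price_usd * fx for fx in pesos_per_usd]
print(local_prices)                  # [10.0, 20.0, 50.0, 100.0] pesos

inflation = local_prices[-1] / local_prices[0] - 1
print(f"{inflation:.0%} local-currency inflation, 0% in dollar terms")
```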

So, Cavallo warned us, when you see people start dealing in foreign currency, it’s time to buckle up.

In recent weeks Argentina has returned to the international stage, this time as its official exchange rate has fallen to 8 pesos per dollar (and the unofficial black-market rate to 12 pesos per dollar). The reason? The country is running out of the foreign exchange reserves that it needs to sell on the open market in order to buy enough of its own currency to prevent its price from collapsing. Reserves are low because few foreigners want to invest in a country that has repeatedly expropriated and nationalized private businesses, and Argentina’s steep tariffs, intended to keep domestic prices and unemployment low, further discourage foreign trade. When a central bank is creating new money faster than its economy is creating real value, there will always be downward pressure on its currency.
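A minimal sketch of those mechanics, with invented figures:

```python
# Invented figures illustrating a central bank defending a falling currency:
# every month it sells reserves to buy back pesos nobody wants to hold.

reserves = 30.0       # billions of dollars of foreign exchange reserves
outflow = 2.5         # billions sold per month to defend the peso's price

months = 0
while reserves > 0:
    reserves -= outflow
    months += 1

print(f"Reserves exhausted in {months} months")  # 12 months, then the peg breaks
```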

Argentina and the United States are similar countries in many ways, especially considering their parallel economic development leading up to the early 20th century. Their development diverged due to differences in their respective political systems and economic policies. Argentina remains a constant reminder of what the United States would look like under a similar political and economic regime characterized by political cronyism, state-sanctioned monopolies, powerful unions, lawless seizure of private assets, fierce protectionism, endless fiscal stimuli, loose monetary policy, unrestrained borrowing, poor productivity, and high unemployment.

Perhaps the most significant difference between the two countries today is that the United States is allowed to pay its debts in its own currency.

At least for now.

Why default is inevitable


We seem to be living in perilous times. Parts of the United States government are shut down because the House and Senate can’t agree to either fund or defund Obamacare, whose rollout this month has been an unmitigated disaster. This impasse now threatens the looming deadline to raise the debt limit before the Treasury supposedly runs out of money sometime in the next 48 hours.

Many people around the world fear that the United States might default on its debt if Congress fails to raise the debt limit. The consequences of default would be far-reaching and hard to predict because so much of the world economy is based on the US dollar and the “full faith and credit” of the US government to honor its debts.

In recent days several friends have asked what I think about this situation. My position is that default is inevitable, but I’d be very surprised if it happened this week. Here’s why:

Most of the Federal budget, nearly 60% of it, is taken up by entitlement programs. People who qualify for these programs, like Social Security and Medicare, are legally entitled to their benefits. The government cannot reduce these benefits without changing the law. About 30% of Federal spending is discretionary, meaning that the government can choose to cut it without violating any laws. And interest on the national debt accounts for less than 7% of the Federal budget.

The problem is that current Federal spending is about 24% of GDP (gross domestic product, i.e. the total value of all goods and services produced by our economy each year), whereas Federal revenue (taxes) is only about 15% of GDP. So for every dollar the government collects in taxes, it borrows and spends an additional 60 cents.

That borrowing represents 38% of the Federal government’s funding, which is greater than the 30% of the budget taken up by discretionary spending. So even if we slash discretionary spending to zero, we can’t fund both our current entitlement spending and our interest payments without additional borrowing.
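The arithmetic is worth sanity-checking with the rounded figures above:

```python
# Sanity check of the budget arithmetic above, with everything expressed
# as a share of GDP. The 24% / 15% / 30% figures come from the text.

spending = 0.24                          # Federal spending, share of GDP
revenue = 0.15                           # Federal revenue, share of GDP
deficit = spending - revenue             # 0.09: 9% of GDP is borrowed

print(f"{deficit / revenue:.2f} borrowed per tax dollar")       # 0.60
print(f"Borrowing funds {deficit / spending:.0%} of spending")  # 38%

# Even zeroing out the 30% of the budget that is discretionary leaves
# spending above revenue, so the borrowing continues:
floor = spending * (1 - 0.30)            # entitlements + interest only
print(f"Spending floor: {floor:.1%} of GDP vs revenue of {revenue:.0%}")
```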

This means we’re in pretty bad shape today, but it’s only the beginning of the story.

In the days before Obamacare became law, the non-partisan National Taxpayers Union issued a report on the unfunded liabilities of then-current entitlement programs. In other words, understanding that we’re going to have greater numbers of older, sicker people in the future and fewer workers funding Medicare and Social Security, how much extra money would the Federal government need to have on hand today in order to deliver all the benefits it has promised under these programs for the next 75 years without raising taxes?

The answer: About $46 trillion. That’s about three times the current United States GDP. And that’s before factoring in the Obamacare subsidies that will theoretically help defray the cost of health insurance for lower-income Americans once the websites stop crashing.

The National Center for Policy Analysis published a similar study 10 years ago, before the Medicare prescription drug benefit (Part D) became law, and reached comparable numbers. Authors Jagadeesh Gokhale and Kent Smetters predicted that the fiscal funding gap would be around $50 trillion including Medicare Part D. They calculated that the gap could be closed by a permanent increase in payroll taxes of about 15%, or by permanently raising income tax revenues by about 60%.
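For intuition on how 75 years of annual shortfalls discount to a figure like $50 trillion, here is a toy present-value calculation; the starting shortfall, growth rate, and discount rate are my own illustrative assumptions, not numbers from either study:

```python
# Toy present-value calculation showing how 75 years of entitlement
# shortfalls discount to tens of trillions today. The parameters are
# illustrative assumptions, not figures from the NTU or NCPA studies.

shortfall = 0.5      # year-1 shortfall, in trillions of dollars
growth = 0.05        # annual growth of the shortfall (aging, health costs)
discount = 0.04      # discount rate (roughly a government borrowing cost)

pv = sum(shortfall * (1 + growth) ** t / (1 + discount) ** (t + 1)
         for t in range(75))
print(f"Present value of the 75-year gap: ${pv:.0f} trillion")  # ~$52 trillion
```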

This entitlement funding gap is frighteningly large, much larger than the value of the national debt that causes so much political hand-wringing. Each year we’re spending more than 8 times as much on entitlement programs as we are on those interest payments that we’re supposedly going to miss if we don’t raise the debt limit. But we don’t have a meaningful national discussion about entitlement reform.

For all this talk about defaulting on our debt obligations, why aren’t we talking instead about the inevitable default on our unsustainable entitlement spending?

Part of the problem is that this conversation necessarily involves math, and a recent study suggests that fewer than 10% of US adults are proficient enough to understand a question of this complexity.

Beyond ignorance, the obvious answer as to why we don’t talk about entitlement reform is that these programs are untouchable. Any attempt to alter them in a meaningful way is political suicide. This feature was admittedly designed into the programs in the form of payroll taxes. As FDR said:

We put those pay roll contributions there so as to give the contributors a legal, moral, and political right to collect their pensions and their unemployment benefits. With those taxes in there, no damn politician can ever scrap my social security program. Those taxes aren’t a matter of economics, they’re straight politics.

Well done, then. 78 years later and still working as designed.

Congress and LBJ amended FDR’s Social Security Act in 1965, creating Medicare and Medicaid.

Medicare, like FDR’s Social Security, is funded through payroll taxes.

Incidentally, some policymakers wanted health insurance to be part of the original Social Security Act of 1935, but they feared that including it would jeopardize the entire bill, so it was left out.

In piggybacking Medicare on top of Social Security and using payroll taxes to fund this new entitlement, LBJ guaranteed that Medicare would be just as politically sacrosanct as its forerunner. And 48 years later, Medicare is still as untouchable as ever. It has been amended several times but has always grown in size and scope.

In spite of their shared political knack for creating these fiscal black holes, FDR and LBJ are not the worst villains in this story. That distinction belongs to LBJ’s successor, a man whose infamy seems to have denied him the privilege of a three-letter sobriquet.

Richard Milhous Nixon did not create Social Security or Medicare. But the magnificently reckless monetary maneuvers he undertook to boost his chances of re-election effectively forced central banks around the world to hold the US dollar as their primary reserve asset, making them complicit in anesthetizing the United States to the fiscal pressures of these two entitlement monsters plus profligate discretionary spending. That arrangement lulled us into a false belief that our government can be exceedingly generous to all without anyone ever having to pick up the check.

When Nixon unilaterally defaulted on our obligations under the Bretton Woods system back in August of 1971, he set us up to run chronic fiscal deficits with no real consequences. And but for a period of fiscal sanity in the late 1990s, we have done just that.

So we find ourselves in the present crisis, stuck between a fiscal rock and a monetary hard place, with massive unfunded entitlement liabilities on one hand and an endlessly-growing national debt on the other. There is no way out but through some kind of default:

  • We could default on our bondholders. One of my professors once said that when sovereign debt reaches unsustainable levels, default is inevitable. And a nation can either default quickly through a restructuring, which is a euphemism for “we aren’t going to pay you back” (see Argentina), or do it slowly through deliberate inflation, i.e. paying off an unsustainable debt with cheaper dollars in the future (arguably the Fed’s present and future course of action). A restructuring of US debt is unfathomable at present as we can (at least for now) easily afford our interest payments. And in spite of what seems like partisan gridlock, the Treasury still has a few tricks up its sleeve before it throws in the towel.
  • We could default on the old, the poor, and the sick. Restructuring Social Security and Medicare seems like a good idea given more than $46 trillion in unfunded liabilities tied to these programs, in addition to the still-unknown cost of the Obamacare subsidies. But by design any of these reforms will entail a steep political cost, and the proverbial chickens likely won’t come home to roost until today’s politicians have retired or died. It may take a threat of (or even an actual) default on our debt in order to compel the government to undertake meaningful entitlement reform. Some low-risk options for restructuring these programs include significant phased increases in the age of eligibility and financial incentives for families to complete end-of-life planning (advance directive, living will, medical power of attorney, etc.).
  • We could default on taxpayers. Granted, there is no prohibition against raising taxes in the future, but a social contract most definitely exists between taxpayers and the government wherein the government promises fair services in exchange for a fair tax rate. We will likely need some type of tax increase in order to close the entitlement funding gap, and any tax increase will be politically risky. We could theoretically minimize political fallout by increasing taxes on only the so-called “rich”, though this type of default cuts both ways in the tax game and is therefore of limited utility in practice.

We’re in a tough spot, and there is no easy way out. We’re ultimately going to have to restructure both our tax code and our entitlement programs if we want to regain the kind of fiscal and monetary discipline that will allow future security and prosperity. These entitlement and tax defaults will be painful, but they will be much more manageable than a disorderly default on our debt.

The most immediate question, then: Who is in a position to lead us through this transition?

Confessions of an unlikely homeschooler



My wife and I teach our three children at home. This is a rather generous assertion on my part, as she does most of the teaching. My primary job is to keep the whole endeavor funded. Beyond that I also teach the sciences, higher math, wood shop, physical education, and occasionally music.

Society calls what we do “homeschooling”, a semantically ambiguous term that generally refers to families who opt out of both public and private schools and take full personal responsibility for the education of their children. But our reasons for doing so differ from those of the millions of other families who follow a similar path. In fact, there are probably as many reasons to homeschool as there are homeschooling families.

In conversations with friends I regularly find myself answering the question, “What made you decide to homeschool?”

My wife likes to say that we started teaching our children as soon as they were born. We helped our kids learn to roll over, sit up, crawl, stand, walk, and run. We taught them to speak, read, and write. We were among a small minority who did not send our kids away to preschool. Instead, we picked up a homeschool kindergarten math curriculum from Saxon.

We soon found ourselves facing a dilemma: By the time she was supposed to start kindergarten, our oldest daughter was a year ahead in math and several years ahead in her reading and writing, not because she is a savant or child prodigy, but because we taught her those things at an early age, when she was ready for them. Should we put her with peers her same age so she could “fit in” socially but be bored academically, or should we put her with peers of similar ability and have her be a social misfit among older kids?

Neither of these was an acceptable option, so we rejected them both. We opted to in-source the teaching and relied on church, music, sports, and other networks for social connections to her peer group.

This period of our lives coincided with my time in graduate school, and I enjoyed applying what I was learning there to the questions we were facing at home regarding education. I understood that our daughter was a statistical outlier among her peer group at the time in terms of her math and reading skills, but those aren’t the only two types of intelligence, and in all likelihood she was below the mean in some other areas. Because it must serve students in very large numbers, the education system in general (and the classroom setting in particular) caters strictly to the mean. So how could an undifferentiated classroom experience possibly be optimal for anyone who is an outlier, positive or negative, in any dimension?

The answer, of course, is that such an experience is not optimal for any student. And that fact is obvious when you consider the way the system is constructed. Any rational system is designed to economize around its most scarce resource. For example, certain parts of our healthcare system are characterized by big, expensive buildings filled with expensive people and expensive equipment. Why do we build these expensive hospitals and make the patients show up and wait around? Because the doctors and the equipment they use are the most scarce resources in the system. We optimize the system to make sure that those resources are fully utilized, and we waste the more abundant and less-valuable resources, like the patients’ time.

Our education system is similar. We build big expensive buildings and fill them with expensive (unionized) teachers. We make the students come to the buildings and do a lot of sitting around, standing in line, waiting their turn, etc. Why? Because we view the teachers as the most scarce resources. We optimize the entire system around the most efficient use of their time. Individual student outcomes are of vanishingly small importance.

It’s the collective outcomes that matter most to the education system. Public schools in the United States are modeled after the system established in 18th-century Prussia, whose purpose was to instill in its citizens the doctrine of social obedience to the King and to produce a steady supply of qualified labor for the bureaucracy, the military, and emerging industry.

In other words, the education system was not designed to produce excellence (positive outliers), but rather a predictable mean and a narrow standard deviation. Why? Because people who think the same are easier to govern, easier to lead into war, and easier to manage at work. To these age-old insights I will add a modern one: It’s also much easier to market to a population that has been systematized into thinking the same way.

I have a habit of waxing verbose on these points whenever I am asked why we homeschool. After one such conversation a few weeks ago, a good friend sent me the following illustration of a TED talk given by Sir Kenneth Robinson. It’s worth watching because it conveys these same ideas graphically and much more succinctly than I do:

[Embedded video: Sir Kenneth Robinson’s illustrated TED talk]

So I have come to understand that our fundamental reason for homeschooling our children is that we see unique greatness in each of them that we don’t want the system to destroy in its endless quest to reduce variance. We want our children to think in ways that are not strictly correlated with how everyone else thinks.

And in the present environment, the stakes are so high that we cannot afford to outsource this job to anyone else.

That’s why we homeschool.

The 1%: They’re at it again…



One percent of Americans now earn a greater share of income than at any time since the 1920s, according to this article posted today. The top 1% of income earners, those who earned more than $394,000 last year, accounted for more than 19% of all income reported to the IRS, while the top 10%, or those who earned more than $114,000, accounted for more than 48% of all income.

I got upset when I read this article. I don’t think it’s fair. And I think something needs to change.

Specifically, I don’t think it’s fair to any of us that the standards of journalism have sunk so low that an AP reporter can get away with publishing a handful of numbers buried in a steaming pile of opinion and pass it off as “news”.

Nassim Taleb rants in his excellent book The Black Swan about how he does not read the news. He relies instead on prices to communicate what’s going on in the world, because prices are purely objective. He ridicules outlets like Bloomberg that try to explain price movements with a narrative that connects them to other events. No doubt you’ve seen the following occur: The market opens lower, and Bloomberg runs a headline “Stocks down on interest rate fears.” And then by lunchtime the market has rallied for some random reason, and Bloomberg changes the headline to “Stocks up on interest rate optimism.”

Of course investors don’t change their minds on interest rates between breakfast and lunch, unless Bernanke happened to make a statement to the press in the meantime. And Bloomberg certainly can’t tap into the thoughts of millions of market participants that quickly with any accuracy. These headlines are absolute rubbish, but we eat them up and come back for more. Taleb attributes this irrational behavior to what he calls the “narrative fallacy”, or our inability to look at facts without trying to come up with a story that ties them together. Everyone seems to want to find a narrative, and every narrative, no matter how far-fetched or even ridiculous it might be, will seem plausible to at least one person.

Let’s apply this lens to the AP article on income inequality. But before we do, I need to rant a bit myself:

I am frustrated with the lack of semantic precision found in most news articles about the economy, particularly the confounding of two concepts that are related but very different: Wealth and income. Too many reporters use these words interchangeably. This lack of discipline (together with lousy education, more on that later) is responsible for spreading a plague of economic and political confusion among the general public.

Wealth is a “stock” variable that describes what you have accumulated. It is expressed in units of money, like dollars. It is your net worth, i.e. the sum of your assets less the sum of your liabilities. It’s the value of your bank accounts, your property, your investments, etc. minus the value of all your debts. It’s the equity on your personal balance sheet. Ideally you want this number to be greater than zero.

Income is a “flow” variable that describes the rate at which you acquire wealth. It’s expressed in units of money per unit of time, like $20/hour or $114,000 per year. Generally speaking, you also want this number to be greater than zero.
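If it helps, think of wealth as a balance and income as the rate at which that balance changes; a minimal sketch with hypothetical numbers:

```python
# Minimal sketch of the stock-vs-flow distinction, with hypothetical numbers.
# Wealth is a stock (dollars); income and spending are flows (dollars/year).

wealth = 50_000       # stock: net worth today
income = 114_000      # flow: dollars earned per year
spending = 120_000    # flow: dollars spent per year

for year in range(1, 4):
    wealth += income - spending        # flows accumulate into the stock
    print(f"Year {year}: wealth = ${wealth:,}")

# Wealth falls by $6,000 a year despite a six-figure income: income alone
# doesn't make you rich if you spend more than you earn.
```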

But describing personal finance using these two variables is problematic for the popular discourse, because we find it convenient to distill reality, in all of its rich detail, down to a series of false dichotomies in order to simplify the talking points. For example, the terms rich and poor.

What does it mean to be rich? Does it mean high wealth, high income, or both? The answer may seem irrelevant to many Americans, considering our labor force participation rate of just 63.3%; there are an awful lot of people who would be happy just to be a little better off than the status quo. Strictly speaking, rich people have high wealth. There isn’t such an easy term for high-income people, so lazy reporters and politicians often call them rich as well. And while high income is correlated with high wealth, it doesn’t necessarily cause high wealth, because one can always spend more than he earns.

Even with unambiguous definitions of wealth and income, and with our best efforts to avoid the narrative fallacy, income statistics can be hard to understand for a number of reasons. So let’s go back and walk through that AP article.

We get off to an inauspicious start with the first sentence:

The gulf between the richest 1 percent and the rest of America is the widest it’s been since the Roaring ’20s.

The erroneous use of the word “richest” to describe the concept of income rather than wealth allows the author to foist another subtle lie upon us: That the top income-earners are an exclusive fraternity whose membership has not changed since the Roaring ’20s. You know, a cabal of old rich guys like Charles Montgomery Burns who’ve been waiting for this “Excellent!” opportunity to return to their pre-Depression glory days.

In fact, the composition of the 1% changes dramatically from year to year. In addition to the perennials like Bill Gates, Warren Buffett, and Mr. Burns, the 1% club includes transitory members like the salesperson who had a great year after spending the previous two living on her modest base salary while she built customer relationships in a new territory, and the farmer who had a bumper crop after getting wiped out by drought the previous year, and the entrepreneur who sold a business he had been running on a shoestring budget for several years without taking a salary himself.

These people and many others like them have a lot of lousy-to-mediocre years and a handful of great ones because they take big risks, and sometimes those bets pay off well.

The author continues:

In 2012, the incomes of the top 1 percent rose nearly 20 percent compared with a 1 percent increase for the remaining 99 percent.

The richest Americans were hit hard by the financial crisis. Their incomes fell more than 36 percent in the Great Recession of 2007-09 as stock prices plummeted. Incomes for the bottom 99 percent fell just 11.6 percent, according to the analysis.

But since the recession officially ended in June 2009, the top 1 percent have enjoyed the benefits of rising corporate profits and stock prices: 95 percent of the income gains reported since 2009 have gone to the top 1 percent.

That compares with a 45 percent share for the top 1 percent in the economic expansion of the 1990s and a 65 percent share from the expansion that followed the 2001 recession.

Again, the use of subtle language like “the incomes of the top 1 percent” implies that each member of last year’s top 1 percent got a 20% raise this year. Use of the singular, “the income of the top 1 percent,” would be factually and grammatically correct. We have no way of knowing the average increase for any individuals year-over-year.

The author repeats the same pattern of flawed reasoning and subtle linguistic hints throughout this section, attempting to construct a fictitious narrative of a typical 1%-er during and after the financial crisis using aggregate data, as if the bearish money managers who did well during the crisis were the same people who profited from the market’s recovery in mid-to-late 2009. And to finish this exercise in fallacious reasoning, the author compares these imaginary individuals with their younger imaginary selves from 2001 and even the 1990s, apparently to show that anyone fortunate enough to have been a member of the 1% club for 23 straight years would have done exceptionally well this year by comparison with his measly-but-still-1% performances in past bull markets.

The author goes on to provide a couple of useful numbers that I already included in my introduction, and then comes the following important disclaimer:

The income figures include wages, pension payments, dividends and capital gains from the sale of stocks and other assets. They do not include so-called transfer payments from government programs such as unemployment benefits and Social Security.

In other words, they’ve assumed that the 37% of Americans who have given up looking for work have no income at all, even though we are supporting them with myriad entitlement programs. The figures also cover the 13% of Americans over the age of 65 who live on Social Security, but those payments don’t count as income, either. So fully 50% of the population, according to this study, effectively has zero income!

Might that little fact explain why this analysis found such a small increase in the income of this population since 2009? They’re not working!

And now, I present to you the narrative fallacy in all its glory:

The gap between rich and poor narrowed after World War II as unions negotiated better pay and benefits and as the government enacted a minimum wage and other policies to help the poor and middle class.

The top 1 percent’s share of income bottomed out at 7.7 percent in 1973 and has risen steadily since the early 1980s, according to the analysis.

Economists point to several reasons for widening income inequality. In some industries, U.S. workers now compete with low-wage labor in China and other developing countries. Clerical and call-center jobs have been outsourced to countries such as India and the Philippines.

Increasingly, technology is replacing workers in performing routine tasks. And union power has dwindled. The percentage of American workers represented by unions has dropped from 23.3 percent in 1983 to 12.5 percent last year, according to the Labor Department.

The changes have reduced costs for many employers. That is one reason corporate profits hit a record this year as a share of U.S. economic output, even though economic growth is sluggish and unemployment remains at a high 7.2 percent.

Every one of these statements is an opinion with enough facts sprinkled in to make the author sound credible. But there are numerous other plausible narratives that could explain how we got to where we are. Here’s my version:

The distribution of incomes narrowed after World War II as millions of employable men returned home to cities rather than to their family farms, swelling the ranks of labor unions as they took private-sector manufacturing jobs. Those jobs paid better than military service or manual labor on the farm because they created more economic value than either alternative use of the same labor.

The trend toward urbanization and industrialization continued until the early 1970s, when increased manufacturing automation, information technology, liberalization of trade, and floating currencies forced manufacturing workers to begin competing against machines and cheap offshore labor.

Meanwhile, easy monetary policy began to inflate the prices of financial assets held disproportionately by wealthier Americans and eroded the purchasing power of the working classes through a politically rigged Consumer Price Index. This decline in household purchasing power coincided with the rise of feminism to encourage more women to join the workforce.

This increased the supply of labor but not the demand for it, which further depressed wage growth, which made the two-income household a modern necessity and deprived children of many of the emotional benefits enjoyed by the previous generation, which in turn coincided with the dumbing-down of public education and ultimately condemned a huge segment of the rising generation to chronic under-achievement.

Which is how we arrived where we are today: Greater variance in household incomes than at any time in recent memory, and no easy way to fix it.

Especially if we allow such lousy reporting to frame the problem inside a politically-convenient narrative that hints at “solutions”.

What to do with Xbox?


Regarding my last post about carving up Microsoft, a Twitter conversation with a good friend merits some elaboration.

I believe that Xbox is not immune from the please-your-best-customer plague that afflicts the rest of Microsoft and kills off disruptive innovation. Here’s why:

One of the coolest things that happened during my tenure at Microsoft was the development and launch of the Kinect sensor for Xbox. Using Kinect for the first time — playing a modded version of Forza 3 at the house of a friend who was on the dev team long before the product had a name — was a frighteningly cool experience. It was instantly clear to me that the technology had immense potential to broaden the reach of gaming beyond the traditional target market that skews young, male, and geeky.

My kids were even invited to be Kinect testers before launch. One afternoon we worked our way through a labyrinthine warehouse near the old company store until we found a mocked-up living room. The kids got some snacks and played a pre-release build of Kinect Adventures. They were absolutely mesmerized. The testers were gathering data on the performance of the sensor, and about a week later they called me up and asked me to bring my then-3-year-old back for a second round. “We just fixed a bug that affects only short skinny kids with long hair, and she’s our ideal test case.”

I had high expectations for the product based on these experiences, and the Kinect launch just blew them away. The product was a runaway success, a proud moment for Microsoft when we really needed a hit.

A few months after the launch I got a call from an executive on the Kinect team who wanted to meet to discuss the healthcare market. I paired up with a colleague and we headed over to the side of campus where all the cool kids work.

At the meeting we learned that their phone had started ringing off the hook as soon as the product launched: customers and partners were calling with all sorts of ideas about how to use Kinect in different verticals. Not being a group particularly concerned about customer intimacy, Xbox just told them all “No.” The phone calls soon became frequent enough that the team hired a guy just to answer the phone and tell everyone “No,” which he dutifully did. But being a smart guy, he started keeping a record of who was calling, what industry they were from, and what their idea was before he gave them the obligatory “No.”

After a few months the smart guy found this job pretty depressing. So he created a report for his bosses that showed the distribution of all the people he had said “No” to by industry, to illustrate that maybe they should be paying attention to what these customers were saying. Healthcare was far and away the top industry in his dataset, with three times more inbound calls than any other vertical.

So the Kinect team had called a meeting with me and my colleague to learn more about what they could do to address the healthcare market.

We took to the whiteboard and started with the forces that are acting on the healthcare system: An aging population, the obesity epidemic, increasing incidence of chronic diseases like diabetes and congestive heart failure, health reform, business model changes due to the rise of accountable care organizations, population health management, etc.

“We think you should consider that the management of chronic diseases in the elderly is a market that is worth in excess of $100B per year, and it will likely require technology similar to Kinect in order to engage with these patients at home, where caring for them will be least expensive,” we said.

“So… make a game for old sick people?”

“Not a game per se, more like a set of apps that use the Xbox + Kinect hardware to allow healthcare providers and payers to manage expensive chronic conditions at a lower cost.”

“Guys, we’re Xbox. We don’t do old people.”

“Why not?”

“It would ruin our brand.”

“What do you mean?”

“Well, our customers are hard-core gamers. You know, young, male…”

“Wouldn’t a set of chronic disease management apps for a different market segment enable you to increase the install base of your platform rather dramatically and then monetize other content, like TV and movies?”

“I think we’re going to make a fitness game for kids who are hard-core gamers.”

“Why is that?”

“Because the kids who play games the most probably need some exercise. And they already know our brand, so we think they’ll buy our fitness game.”

“So your games made these kids fat, and now you’re going to make another game to help them get in shape?”

“Yeah. Pretty cool, isn’t it?”

Well, no, not really. Taking a magnificent technology like Kinect that has the ability to change the world in seriously meaningful ways and offering it only to your best customers within the context of how you make money today is not cool. It’s picking the low-hanging fruit.

My colleague and I left the meeting disappointed that we couldn’t convince these guys to elevate their vision and accomplish anything more than attempting to undo some of the damage that their product had inflicted on a generation of sedentary kids. We were frustrated that they had called and asked for our advice when they already seemed to have made up their minds about what they were going to do.

We soon found out why. When we followed up with the Kinect guys a few weeks later, we learned that the exec who led the discussion had been given a nice promotion to lead a new game studio focused on (you guessed it) fitness games for fat kids.

I’m sure it has been a good career move for him, a low-risk way to get to the next level by offering something novel to his best customers. I just regret that he’ll never get measured against the massive value that he could have created if he had been more willing to pursue disruptive rather than sustaining innovation.

So I believe that Xbox has tremendous potential. They have some great technology and some fantastically talented people. They could reinvent the way we experience media in the living room by combining TV, movies, music, gaming, and web content in innovative ways. They could leverage other consumer technology assets in Microsoft’s portfolio and do some great things. And they could go far beyond that if they are willing to think differently about who their best customers might be in the future.

But I think that they are just as risk-averse as the rest of Microsoft when it comes to business model innovation. Odds are that young, male, flabby geeks will continue to occupy the attention of the people running the business for the foreseeable future. And if Xbox can’t focus beyond their current customers, they would be better off on their own than as part of Microsoft’s broader consumer strategy.

It’s Time for Microsoft to Break Up



A lot has happened since my last post about Microsoft. There has been rampant speculation about who should be the next CEO. The company acquired the mobility business of Nokia, perhaps signaling that Stephen Elop is the man to beat for the top job.

But an article in today’s New York Times offers a perspective that aligns most closely with my own: That the company is too big to manage, too unwieldy to be agile, that the Nokia acquisition makes this problem worse, and that Microsoft would be able to compete better if it split itself into smaller, more focused companies.

The fact is that Microsoft will not solve any of its problems by growing. More products, more markets, more features: These go-to plays fueled the company’s past growth. To get itself unstuck, the company now needs to do much less: Fewer strategies, fewer products, fewer features, fewer businesses, fewer employees. Microsoft will need to make a difficult pivot in the near future if it wants to remain relevant, and it’s far easier to turn a small ship than a large one.

So I’ll add my voice to those calling for Microsoft to split itself into several smaller companies that will be easier to manage.

One of the mini-Microsofts might focus exclusively on corporate customers. The winning strategy here is customer intimacy, not product leadership. The company can run a successful business catering to the needs of enterprise IT customers for many years with only modest incremental enhancements to its current set of products, but not if those products are undergoing the kind of rapid change necessary to compete effectively in the consumer market. This mini-Microsoft might end up looking a lot like Oracle.

Another mini-Microsoft might focus exclusively on consumers. The goal here needs to be to pursue product leadership, to run faster, to recapture lost mojo, and to establish a thriving developer ecosystem at the low end of the overall software market, where customer expectations are relatively low compared to those of enterprise IT. If this mini-Microsoft is successful, consumers will drag its platform into the enterprise market the way they did with Windows 20 years ago, and the way they are doing with the iOS and Android platforms today. This business will benefit from shedding the Microsoft brand. Its product portfolio might end up looking a lot like that of Apple.

Finally, there will be leftover pieces of the company that will need to be either sold to strategic buyers or spun out on their own. I like the NYT article’s suggestions that Bing would pair well with Facebook and that the Xbox business might make a nice stand-alone company.

There are thousands of ways that a company with hundreds of businesses might be sliced up. What other combinations of Microsoft businesses do you think would stand a chance of success on their own?

Steve Ballmer: In memoriam


Today Microsoft announced that Steve Ballmer will retire from the company within the next 12 months, a move that the market applauded.

It is hard to imagine Microsoft without him. Steve has more energy and passion for the business than anyone else on the planet. He’s a consummate salesman who loves his products so much that he makes other people uncomfortable and doesn’t understand why. He’s plenty smart, but he’s no technologist. And he’s compiled an unenviable track record, having missed every major wave of technology innovation during the past 13 years.

Music. Smartphones. Tablets. App Stores. Search. Advertising. Social. Cloud. Companies that caught these waves have created in excess of a trillion dollars of market capitalization during Ballmer’s tenure while Microsoft has vaporized a third of a trillion of its own value. Ballmer has presided over what may be the most dramatic illustration of the Innovator’s Dilemma that the world has ever seen.

Of course, Microsoft has (or had) a product (or several) in each of these categories that it missed. But the company has been super risk-averse regarding business model innovation. They never had the courage to put their core businesses at risk, to do anything that might disrupt Windows or Office. So these new and potentially-disruptive products never took off because they were over-featured (Excel on a phone, anyone?) and too expensive for all but the most demanding of the company’s current customers. Recent attempts at business model changes (Bing, Azure, Surface) have failed to gain traction as the company repeatedly waited until a competitor had proved a new business model and then attempted a “fast follower” strategy, though too late to prevent the competitor from creating barriers to entry through economies of scale or network externalities.

And so Microsoft under Ballmer became slow, lacking in confidence, and perennially stymied by its competitors. Many talented employees left for greener pastures, which precipitated a vicious cycle of management frustration, more missed opportunities in the marketplace, and more turnover. Yet the core businesses grew at modest rates for a time under Ballmer’s stewardship, playing to the confirmation bias of the executive team that the emperor was in fact wearing his amazing new clothes and the critics were just too stupid to understand anything.

I scored a front-row seat to this show when I joined an internal startup, the Health Solutions Group (HSG), focused on creating new products for the healthcare vertical. My job was to complete the business model for this offering. Given our market, our assets, the value proposition of those assets in the jobs that customers were willing to hire them to do, our channels, partners, cost structure, and commitments to our internal “investor”, what was the optimal way to monetize our offering? How would our assets be packaged and licensed, and how would these packages be priced?

HSG had its own sales force and could go to market any way it wanted, at least for a while. But our presumed destiny was to land somewhere within the bigger Microsoft machine, and for that to happen our packaging and licensing needed to conform to one of a handful of models administered by a group inside the company called World Wide Licensing and Pricing (WWLP).

WWLP is responsible for licensing and pricing a magnificent amount of business, around $30B per year across all the divisions of the company, so it focuses heavily on operational excellence. It can support only a finite number of business models and each of these needs to be applicable across as many products, segments, and geographies as possible. Microsoft therefore prices its Enterprise products by “servers” (physical or virtual machines that run products for many users) and “client access licenses” (or CALs, licenses for each user that touches a product). These pricing models are arcane, highly abstract, and only loosely related to how enterprise software creates value. Naturally, customers hate this way of doing business.
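As a rough illustration of the server-plus-CAL structure (the prices here are invented, and the real price lists were far more elaborate):

```python
# Hypothetical illustration of server + CAL licensing. The prices and counts
# are invented; real Microsoft price lists were far more elaborate than this.

def license_cost(servers: int, users: int,
                 server_price: float = 4_000.0,   # hypothetical per-server fee
                 cal_price: float = 100.0) -> float:  # hypothetical per-user CAL
    """One license per server plus one client access license per user."""
    return servers * server_price + users * cal_price

# A 5-server deployment touched by 2,000 users:
print(f"${license_cost(servers=5, users=2_000):,.0f}")  # $220,000
```

Note that the bill scales with machine and head counts, not with the value the software actually creates, which is exactly the disconnect customers hated.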

So how do you take a product that by design can do almost anything (the topic of a future article) and monetize it through such a rigid and odious packaging and licensing framework? You can’t. Well, you can try, but it won’t work. We were required to shoehorn a magnificently innovative product that needed a commensurately innovative business model through a set of resources and processes that represent the lowest common denominator in the enterprise software business. The salespeople didn’t understand the result, nor did the customers, and we didn’t sell much of it. The need to conform to an off-the-shelf business model in an organization of Microsoft’s scale was a major contributing factor to our failure to thrive.

As I began to foresee the outcome of the HSG adventure, I wanted desperately to believe that our experience was anomalous, that somehow this ignorance of business model innovation was not a result of the company’s leadership. A group of people this smart and this successful at having disrupted the computing business would not be falling into the Innovator’s Dilemma themselves, would they?

I regret that I did not seize the opportunity to voice these questions to Steve directly. The only times he and I were in the same room, apart from all-hands meetings, were early in the morning at the gym, where he was either galumphing on a treadmill or floating in a whirlpool in the locker room.

Once I passed him on the way into the gym at 5:30am on a Wednesday. He was leaving and looked intensely serious. I smiled and said, “Good morning, Steve.” That was all I could muster at the time. “Hey,” he replied.

In retrospect I wish I had stopped him and said, “Steve, I’m frustrated. We have so many smart people here. We work very hard, and yet we’ve missed every major wave of innovation in the past decade. What’s going on?”

A meeting scheduled a few days after this chance early-morning encounter provided perspective on how Steve would have answered my unasked question. This meeting was part of a “listening tour” organized by our corporate head of HR, who reported directly to Steve. She scheduled a stop in my building and I showed up early to get a good seat so I could ask this question about innovation.

The only problem was that once the meeting got started, someone beat me to the punch. A woman from elsewhere in Microsoft Research asked my question, almost verbatim. Our head of HR was no typical HR person. She was a developer, someone who had run some big businesses at Microsoft before taking on the HR role, and she was one of Ballmer’s inner circle. I awaited her response in eager anticipation with the understanding that she would provide a window into his thinking on the matter.

The answer just about knocked me out of my chair. I didn’t record it, so the following is not a direct quote, but her response went something like this:

Well, you know, that’s a really good question. It is true that we have missed quite a few opportunities over the years. But what you have to understand about this business is that it’s a very large business, and it generates some very nice dividends. And we really like those dividends, so we have to protect the parts of the business that generate the most profit.

Take search, for example. Could we have invented search in, say, 1997? Of course we could have. But that would have taken time and money. It would have taken a substantial amount of money to come up with search, let’s say $100-150 million. And what you have to think about is, how much better off are our core businesses, Windows and Office, because we invested that $150 million in them in 1997? They are much better off today than they would have been if we had spent that money inventing search.

So have we missed opportunities to innovate? Absolutely. And we will probably miss some more in the future. But we’re doing the right thing by continuing to invest in our core businesses.

Now, in general I believe that there are infinitely many wrong answers to any question, just as infinitely many lines can be drawn through a single point on a plane. But sometimes there is one answer that is precisely wrong, just as there is only one line through a given point that is perpendicular to a given line in the same plane.

And if there was one answer to my question on innovation that was precisely wrong, this executive had just given it. I was mortified. Shell-shocked. Nonplussed. I suddenly realized that I was watching the Innovator’s Dilemma playing out painfully in real time, and that the problem went all the way to the top of the company.

That answer, and the vital decisions made every day regarding strategy and culture that surrounded and enabled it, diverged from everything I know to be true about how to build a business that creates real value for customers and shareholders, one that succeeds in the long run.

Working at Microsoft meant I would either have to live with the constant stress of these conflicting systems of belief about innovation, or capitulate and adopt the company’s way of thinking. Neither of these was acceptable. I had to get out.

I committed to myself that I would do more due diligence on future employers and stay away from companies that have evolved “antibodies” like this to fight innovation. I haven’t perfected this art of due diligence yet, but I’m getting better with each new iteration.

And so, as the Microsoft board commences a search for the next CEO, I humbly offer one bit of advice: Find someone who is a true innovator, one who has walked the walk both as a developer of products and a developer of disruptive business models. Microsoft’s future success will not lie in imitating today’s market leaders any more than Google, Apple, and Facebook disrupted Microsoft by imitating what goes on in Redmond.

If Werner Heisenberg Could See Us Now…


We are bombarded with marketing messages every day, at every turn. These messages are carefully crafted, meticulously tested, and precisely delivered in order to maximize their influence on our perception and behavior.

Most of the time consumers don’t stand a chance against marketers. There is great asymmetry in the information available to these two classes. Marketers spend much time and money trying to learn how consumers think, whereas fewer and fewer consumers seem to be capable of thinking at all.

But every once in a while I find a company that makes a marketing mistake so flagrant and egregious it actually gives me hope that consumers might see right through it. Here is an example:

A recent article on a company blog touts a survey (done by the same company) that claims 80% of doctors surveyed believe that “virtual assistants” will significantly change the way that doctors do their work within five years.

The problem, of course, is that in a healthcare context no one knows what a virtual assistant really is or does. And the world’s best-known virtual assistant is used daily by only 33% of its customers, has sites devoted to its endless gaffes, and has been said to possess the intelligence of a hammer.

So how could 80% of physicians feel so ebulliently optimistic about something that doesn’t yet exist and whose nearest ancestor is barely useful?

There is an Uncertainty Principle at work in marketing that is not well appreciated by most consumers (and apparently some marketers): You can educate your customer about a new product, and you can measure their perception of your new product, but you have to be careful about trying to do both at the same time.

Why? It’s called priming, and it’s one way that marketers manipulate survey results, whether intentionally or through ignorance.

If your customers know nothing about your product before you survey them, then you naturally want to educate them a bit before you ask them what they think. You’ll tell them all about your vision and the great things that your product will eventually do. But in doing so you are priming them. Whatever they tell you next will be strongly influenced by the proximate experience of learning about your vision. If you create a positive learning experience, they will tend to respond positively even if your vision has only a tenuous connection to reality.

The correct way to measure perception is to do it right out of the gate, at the beginning of the survey instrument, before you’ve perturbed the respondent in any way. Otherwise you’re not measuring reality; you’re measuring a survey experience that is likely far more flattering than the product itself.

So if a survey claims that 80% of customers are excited about a product that doesn’t exist, the survey probably has a priming problem.
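
To see how much damage priming can do to a number like that 80%, here is a toy simulation. Every figure in it (the baseline opinion, the 30-point priming lift, the 60-point “favorable” threshold) is invented purely for illustration; the point is only that the same population yields wildly different headlines depending on when you ask:

```python
import random

# Toy model of survey priming. All numbers here are invented for
# illustration; nothing is measured from any real survey.
random.seed(42)

def respondent_score(primed: bool) -> float:
    """Favorability on a 0-100 scale. The baseline opinion is mildly
    skeptical; hearing a glowing product pitch first shifts it upward."""
    baseline = random.gauss(40, 15)   # assumed unprimed opinion
    lift = 30 if primed else 0        # assumed priming effect
    return max(0.0, min(100.0, baseline + lift))

def pct_favorable(primed: bool, n: int = 10_000) -> float:
    scores = [respondent_score(primed) for _ in range(n)]
    return 100 * sum(s >= 60 for s in scores) / n  # "favorable" = 60+

print(f"Asked first, then educated: {pct_favorable(False):.0f}% favorable")
print(f"Educated first, then asked: {pct_favorable(True):.0f}% favorable")
```

With these made-up parameters, the unprimed group comes in around 10% favorable and the primed group around 75%, which is exactly the kind of gap a survey can manufacture without changing a single respondent’s underlying opinion.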

Let’s assume for a moment the contrary position: that the survey methodology is sound. Now, if a company had a crystal ball that accurately predicted future customer preferences, why would it shout the output from the housetops? Wouldn’t it just build that irresistible product and corner the market before competitors could respond?

Of course it would. So this survey obviously couldn’t connect the dots between perception and intent to purchase. Which means this whole exercise is about nothing more than creating the perception of a positive perception about a product that no one really knows about because it doesn’t exist.

Ironically, Google served up the following ad (Google ads on a company blog? Really?) at the bottom of the post in question:

And if Google can deliver such a perfectly relevant ad when it detects unadulterated marketing BS on a company blog, maybe there is some hope for the BS detectors in consumers’ brains after all.

Gun Control, Part 3: Back to the Present


This is the third installment in a series on the gun control issue. In our first episode we examined data that demonstrate a correlation between fewer guns and an increased variance in the homicide rate. In the second we explored game theory as a hypothesis to explain this phenomenon. This time around we’ll look into another hypothesis, one that can explain why past gun control laws have failed to keep guns out of the hands of criminals and what this means for future legislative efforts.

To begin, we have to go back to school. Way back. You’re a four-year-old living in the Bay Area in 1972. You are friends with a girl whose dad is a professor at Stanford. His name is Walter Mischel, and you are about to help him understand something that will inform our conversation on this issue.

One day your parents drop you off at Dr. Mischel’s lab for play time. After some opening activities, you find yourself alone with a researcher, sitting at a kid-sized table in an otherwise empty room, staring down a giant marshmallow. You tell the researcher that you like marshmallows very much. She smiles and says that the marshmallow is yours, and you can eat it if you want to.

But wait, there’s more: She says she has to leave the room for about 15 minutes, and if you haven’t eaten your marshmallow when she returns, she will give you another one, too! The researcher exits the room and you are left alone with your marshmallow, weighing your options.

You’re a precocious little preschooler, future Stanford material, so you calculate your rate of return on this trade:

  • You have one marshmallow
  • If you invest that marshmallow for 15 minutes in this grad student’s scheme, you will earn a second marshmallow
  • Your return on investment, therefore, is 1 marshmallow/1 marshmallow = 100%
  • Your rate of return is 100%/15 minutes, or a whopping (2⁴ – 1) x 100% = 1,500% per hour! At that rate you’d score more marshmallows than there are atoms in the observable universe in less than three days (see the sanity check below), assuming you could keep the grad student on the hook that long.
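
For the skeptical, here’s a quick sanity check of that arithmetic in Python. The atom count is a rough order-of-magnitude estimate; everything else follows from doubling every 15 minutes:

```python
# Sanity check of the marshmallow arithmetic above.
# Assumption: the stash doubles every 15 minutes
# (a 100% return per period, compounded).

ATOMS_IN_UNIVERSE = 1e80  # rough order-of-magnitude estimate

periods_per_hour = 4
hourly_multiple = 2 ** periods_per_hour          # 16x per hour
hourly_rate = (hourly_multiple - 1) * 100        # (2**4 - 1) * 100 = 1500%
print(f"Rate of return: {hourly_rate:.0f}% per hour")

# How long until the stash exceeds the atoms in the observable universe?
marshmallows, periods = 1.0, 0
while marshmallows < ATOMS_IN_UNIVERSE:
    marshmallows *= 2
    periods += 1
print(f"Universe overtaken after {periods} doublings, "
      f"about {periods / periods_per_hour / 24:.1f} days")
```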

Fifteen minutes later the researcher returns and sees you rubbing your little hands together with glee, marshmallow still on the table. “How about double-or-nothing?” you eagerly ask. You’ve decided that you don’t need Stanford after all. You’re on your way to starting your own hedge fund!

Your less-strategic friends didn’t fare so well in the same study. A few of them ate the marshmallow right away. Others made an effort to delay eating the marshmallow but just couldn’t pull it off: Fewer than 1/3 of the study participants were able to wait the full 15 minutes and earn the second marshmallow.

You can read more about the marshmallow experiment here.

Let’s jump ahead a few decades and play a grown-up version of this game, called Zero-Coupon Bond. It works like this:

I need to raise some money, and you happen to have some money. I promise that I can pay you $100 one year from today, but I can’t pay anything until then. How much money will you lend me today in exchange for $100 one year out?

Let’s leave creditworthiness out of the question; assume I have plenty of assets to post as collateral. The real issue for you is the opportunity cost of your money, i.e. the return you could earn on the money you lend me if you were to invest it somewhere else.

Let’s say you offer to lend me $90 today in exchange for $100 a year from now. I calculate that the cost of that capital is ($100 – $90) / $90 = $10 / $90 = 11.1% per year.

“Dude,” I object. “That’s, like, 1100 bips higher than the 1-year T-Bill rate. What kind of friend are you?”

You admit that the rate is a bit loan-sharky. You blame your experience at Stanford 41 years ago for your unrealistic expected rates of return. We agree to split the difference: you lend me $95 today, and I will pay you $100 a year from now. That’s an interest rate of $5 / $95, or 5.26% per year. Not great, but at least I didn’t have to pay an entire universe of marshmallows.

We call this a zero-coupon bond because it is a bond (a promise to pay) that features no coupon payments, the regular installments of interest (like those on your mortgage) paid before the bond matures. The principal ($95) is paid back with interest ($5) in a single payment at maturity. And we like zero-coupon bonds because they are really simple instruments that let us experiment with the relationship between present value and future value. They’re the financial equivalent of lab mice.

So here’s a thought experiment: What if you had priced the bond at only $94? Maybe I wouldn’t have sold you my promise to pay. What if I had asked $96? Maybe you wouldn’t have bought it. It turns out that $95 was a very special price for us: you and I were both at least indifferent between $95 today and $100 a year from now, and so the deal was done.

But it’s not the $5 that matters; it’s the interest rate those dollars represent. This rate, 5.26% per year in our example, establishes a relationship between present value and future value. In finance we refer to it as a discount rate, the rate at which we discount future value to bring it back to the present so we can compare what we spend today with what we will earn in the future, or vice versa. The higher the discount rate, the lower the present value of future cash flows.

And this discount rate has a special place in decision science. In a business the discount rate for future cash flows is the firm’s cost of capital. A business that raises capital at a cost of 12% will not undertake projects that yield future returns of less than 12% per year if its managers are rational. The business is indifferent to projects that yield exactly the discount rate and it will generally consider investing in projects whose yield exceeds the discount rate. Managers sometimes refer to the discount rate as the “hurdle rate”, because it is the rate of return that an investment needs to “clear” in order to be considered.
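
Here is a minimal sketch of the bond math and the hurdle-rate rule in Python. The function names are mine, not standard finance-library calls:

```python
# A minimal sketch of the bond arithmetic above.

def implied_annual_rate(price: float, face: float) -> float:
    """Annual rate implied by paying `price` today for `face` in one year."""
    return (face - price) / price

def present_value(cash_flow: float, rate: float, years: int = 1) -> float:
    """Discount a future cash flow back to today at the given annual rate."""
    return cash_flow / (1 + rate) ** years

print(f"Lend $90 for $100: {implied_annual_rate(90, 100):.1%}")   # 11.1%
print(f"Lend $95 for $100: {implied_annual_rate(95, 100):.2%}")   # 5.26%

# The hurdle-rate rule: fund a project only if its expected yield
# clears the firm's cost of capital; be indifferent at exactly that rate.
cost_of_capital = 0.12
for project_yield in (0.08, 0.12, 0.20):
    if project_yield > cost_of_capital:
        verdict = "invest"
    elif project_yield == cost_of_capital:
        verdict = "indifferent"
    else:
        verdict = "pass"
    print(f"Yield {project_yield:.0%} vs. hurdle {cost_of_capital:.0%}: {verdict}")
```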

It turns out that each of us humans is running around with our own individual hurdle rate in our head. We are faced with a constant stream of decisions that trade off future vs. present rewards:

  • Do I sleep in, or do I wake up and go to work? Sleeping in pays off now, work pays off later.
  • Do I have dessert? That would definitely pay off now, but skipping it generally pays off later.
  • Do I smoke the next cigarette? Smoking might pay off now, not smoking pays off later.
  • Do I lie, or do I tell the truth? Lying pays off now, telling the truth pays off later.
  • Do I contribute to my 401(k) with each paycheck, or do I spend that money on entertainment? You get the idea.

For those decisions that clear your individual hurdle rate, you tend to choose the future reward. For those that don’t clear the hurdle rate, you tend to choose the present reward.

So what is the range of hurdle rates that we encounter in society?

It’s huge. Frighteningly so.

Recall the marshmallow experiment above: a return of an entire universe of marshmallows in three days failed to clear the hurdle rate of more than 2/3 of the preschoolers. Children in general have incomprehensibly high hurdle rates for future rewards, as most parents will attest.

Drug addicts also have high discount rates when compared with the general population. Many acknowledge that they take enormous risks in order to get their next fix, then do it anyway, over and over again.

Persons who are mentally ill exhibit impaired decision-making similar to that of substance abusers.

And these three classes of people are disproportionately represented in the US prison population.

It seems safe to assume that the discount rate among those who end up incarcerated is quite high, meaning that criminals are likely to choose present rewards over future rewards, every time.

Here’s an illustrative example:

Going back to our robbery vignette from episode 2, suppose you have $100 in your pocket and I have nothing in mine. I can choose to rob you and get $100 right now. I know that I have a pretty good chance of getting your money even if you put up a fight or try to flee.

I also know that if I commit a robbery, I run the risk of getting caught and going to jail for, say, a year. For the sake of easy math, I’m going to value my freedom at $100,000 per year, or the opportunity cost of all the fun I could have during that time as a free man.

So assuming I pull off the robbery and get caught, my payout would look like this:

Gun Control 3-1

I get $100 today and lose $100,000 over the next year. Pretty lousy deal at face value, isn’t it? Things don’t even look very good in present-value terms for a rational person whose discount rate is, say, 10%:

Gun Control 3-2

The opportunity cost of the first year of future incarceration is discounted by 10% (divided by 1 + 10% = 1.1) to arrive at a present value of -$90,909. The “net present value” or NPV is the sum of what I get immediately ($100) and the discounted future value of what that decision costs me over time (-$90,909). So the NPV of a decision to rob you is $100 + -$90,909 = -$90,809. If I think like most adults, there’s no way I’m going to risk the robbery. Getting caught would be just too expensive.
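
For those who want to check the numbers, here’s a minimal sketch of that NPV calculation:

```python
# Reproducing the robbery NPV above; a sketch, not policy analysis.

def npv(immediate: float, yearly_costs: list[float], rate: float) -> float:
    """Immediate payoff plus discounted future costs (negative values)."""
    return immediate + sum(
        cost / (1 + rate) ** year
        for year, cost in enumerate(yearly_costs, start=1)
    )

loot = 100.0
year_in_jail = -100_000.0  # opportunity cost of one year of freedom

print(f"NPV at a 10% discount rate: ${npv(loot, [year_in_jail], 0.10):,.0f}")
# -> NPV at a 10% discount rate: $-90,809
```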

But let’s use this model to ask a rather interesting question: How high would my discount rate need to be to justify the robbery? In other words, how much would I have to discount the future punishment in order to make robbing you today seem like a good idea?

It turns out that if my discount rate were 99,900% per year, I’d be indifferent to robbing you given the payouts above. That rate seems astronomically high, doesn’t it? If a person with that kind of discount rate were your lender in the Zero-Coupon Bond game above, they’d insist on lending you only $0.10 today in exchange for a $100 payment one year from now. If a business used that discount rate to allocate capital, it would invest only if it could nearly double its money every month.

But a rate of 99,900% per year equates to only about 2% per day, still far below the discount rate of most preschoolers as measured in the Stanford marshmallow experiment. The disturbing conclusion is that this simple crime could very well pay off in NPV terms for someone with the discount rate of a child, or a drug addict, or one who is mentally ill. In other words, for precisely the classes of people who tend to end up in prison in the United States.
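
A few lines of Python confirm both the indifference rate and its daily equivalent:

```python
# Finding the indifference rate: the annual discount rate at which
# the robbery's NPV is exactly zero.

loot, cost = 100.0, 100_000.0

# NPV = loot - cost / (1 + r) = 0  =>  1 + r = cost / loot
r = cost / loot - 1
print(f"Indifference rate: {r:.0%} per year")         # 99900% per year

# Equivalent daily rate, compounding 365 times per year
daily = (1 + r) ** (1 / 365) - 1
print(f"Equivalent daily rate: {daily:.1%} per day")  # about 1.9% per day
```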

We who consider ourselves to be rational adults struggle to understand this fact, but that doesn’t stop it from scaring us. And we respond like we’re dealing with rational adults: If 1 year in prison for robbery doesn’t deter the crime, let’s make it 2 years.

But how does a criminal view another year in prison? We’ll keep the other terms (amount of money stolen, opportunity cost of a year in jail, discount rate) of the deal the same:

Gun Control 3-3

Wait a minute, what happened here? The NPV is now $100 + -$100 + $0 = $0, and the perpetrator is still indifferent to the crime!

Perhaps we “rational” adults forgot that discount rates, like other interest rates, compound exponentially over time. We discount the Year 1 punishment back to present value by dividing by 1 + the discount rate, or 1 + 99,900% = 1,000. We calculate the PV of the second year of prison by dividing its opportunity cost by (1 + the discount rate) squared, or 1,000,000. And for the third year we would divide by (1 + the discount rate) cubed, or 1,000,000,000, and so on.

So if the perp’s discount rate wasn’t big enough to discourage the crime given a 1-year sentence, adding more years isn’t going to make a bit of difference. You could threaten him with an automatic life sentence, and the crime would still pay off in NPV terms.
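
The same model shows just how little each additional year of sentence moves the needle. This sketch assumes the same loot, opportunity cost, and 99,900% discount rate as above:

```python
# Why longer sentences don't deter a high-discount-rate perpetrator:
# each extra year is discounted by another factor of 1,000.

loot, yearly_cost, r = 100.0, 100_000.0, 999.0  # 99,900% per year

for years in (1, 2, 5, 50):
    npv = loot - sum(yearly_cost / (1 + r) ** y for y in range(1, years + 1))
    print(f"{years:>2}-year sentence: NPV = ${npv:,.2f}")
# 1 year: $0.00; 2 years: $-0.10; 5 years and beyond: still about $-0.10
```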

The threat of additional incarceration does nothing to dissuade those who are already predisposed to view crime as a good investment. This idea is counter-intuitive for those of us who have reasonable discount rates and are therefore motivated to stay out of jail; we naturally view obeying the law as the better investment. Incarceration thus becomes a means of removing high-discount-rate persons from society. And there sure seem to be a lot of them lately.

What’s causing this? And what can be done to change the status quo?

First, we need to recognize that even those with the highest discount rates for future consequences cannot discount immediate ones. Consider the internet café armed robbery video we discussed in the last episode: The bad guys viewed robbing the café and its patrons as a good investment in spite of the high likelihood that they would be incarcerated. However, they quickly (and dramatically) changed their minds when faced with the threat of immediate death. Their discount rate didn’t change, but the consequences were moved out of the future and into the present where the discount rate is irrelevant. Criminals respond better to immediate consequences than to future ones.

Second, we need to recognize and begin to address the futility of the growing mountain of new laws that are aimed exclusively at people who have no intention of obeying existing ones. These legislative actions seem to be popular and therefore help our officials justify their bids for re-election, but do they do any real good? And how do we calculate the cost of already-law-abiding people having to navigate an endless web of rules?

Finally, we can go back to the Stanford marshmallow experiment and internalize a few of its findings from follow-up studies of the same participants:

  • Having a low discount rate in preschool (i.e. tending to choose future rather than present rewards) was correlated with being described by parents and peers as more competent ten years later.
  • Choosing future rewards in preschool was correlated with achieving a higher SAT score in high school.
  • A 2011 study of the same participants indicates that the ability to choose future rewards over present ones remains with a person for life. Brain imaging showed key differences between the marshmallow-eaters and non-eaters in areas linked to decision-making and addictions.

Most importantly, a similar study in 2012 found an underlying factor that significantly affected the subjects’ ability to choose the future reward: the participants were divided into two groups, one that was primed with a broken promise before being presented with the first marshmallow (the unreliable-tester group), and one that was primed with a fulfilled promise (the reliable-tester group). Subjects in the reliable-tester group waited four times longer before eating the marshmallow than subjects in the unreliable-tester group.

Whoa there, pardner.

Are you saying that the way a preschooler makes decisions is correlated with the way he will make decisions as an adult?

Yes.

And are you saying that these behaviors in preschool ultimately predict the way his adult brain will work at the cellular level?

Yep.

And you’re saying that, while he’s still in preschool and this behavior is not yet hard-wired into his neurons, this tendency to choose future rewards — that will make him more competent, more successful, and even keep him out of jail — can be increased by something as simple as keeping your promises to him?

That’s exactly right.

But none of these facts will help anyone get re-elected. And so this time around we’ll get a new set of gun control laws, which will be obeyed by the law-abiding citizens and completely disregarded by the criminals. All perfectly rational, of course.
