
The Engineer’s Dilemma: Options for Graduates


One of my favorite books is Robert Pirsig’s classic, Zen and the Art of Motorcycle Maintenance: An Inquiry Into Values. In it, Pirsig observes that many times, when we are faced with what appears to be a dilemma, a choice between two mutually exclusive paths in which neither is optimal, the best course of action is to reject the dilemma and choose a third path.

As he says, if you are face to face with an angry bull and given the choice of being gored by either the left horn or the right, choose neither. Choose instead to punch the bull between the eyes, or perhaps lull it to sleep, or come up with a better plan in the moment. Choose anything but one of the offered paths that is certain to end in a bad outcome.

Engineers and other STEM workers are faced with a similar dilemma: Do they stay the course in the face of planned obsolescence, or do they walk away and try something completely different? Those are the two horns of the Engineer’s Dilemma. But this is a false dilemma; there are many other options:

1. STEM workers and those who understand planned obsolescence can advocate for meaningful changes to the way high-potential talent is managed in their companies. This is not an easy task, and success is far from guaranteed, but the probability of successfully instigating and even leading this change is greater than zero.

2. High-potential STEM workers can seek out new employers who have a culture that is more conducive to sustainable growth in both responsibility and rewards. This search is much more easily accomplished when it is planned in advance and not done under duress. A change of employment can be difficult, even life-altering in the short run, but if done well it will be a tremendous benefit over time.

3. All STEM workers can invest proactively in training, many times courtesy of their employers, to round out their skills in areas that will enable them to succeed in adjacent and potentially lucrative career paths like marketing, product management, sales, business development, and entrepreneurship. STEM graduates are particularly well-suited for many of these due to the rigorous education they have already completed. Training for a new function is not always a matter of going back to graduate school; rather, it is a matter of acquiring the right mix of knowledge, practice, and coaching to succeed in navigating a planned step into an adjacent career path.

What do you think? Share your insights below.

The Engineer’s Dilemma: Introduction


I’m an engineer. For as long as I can remember I have loved to build new things and take old things apart. As a kid I had an electronics kit and a chemistry set and preferred reading the encyclopedia to playing football after school. I taught myself how to program our Apple II+ while I was still in grade school, and by the time I reached high school I was proficient in several programming languages and spent my summers writing, deploying, and maintaining a CRM system for the family business.

When it came time to go to college, my biggest decision was which field to study. This was long before the term STEM (Science, Technology, Engineering, and Mathematics) was used to describe the interests of people like me, but I could have gone in any of those directions as I entered the university. I ended up graduating with a degree in Computer Systems Engineering, a perfect blend of electrical engineering and computer science that prepared me exceptionally well for my career thanks in large part to my undergraduate mentor, Professor David Pheanis.

But I no longer work as an engineer. Ten years ago I made the leap to product management inside the small health IT company where I had spent the previous decade writing software. That move ultimately led me to graduate school at Wharton and a series of product strategy jobs in bigger companies. It’s been an amazing and rewarding journey so far.

It turns out that I’m not alone among STEM graduates: A recent study estimates that in the United States fewer than 1/3 of college graduates with STEM degrees work in STEM-related jobs. The authors seem to use this study as evidence that the United States does not need to liberalize its immigration policies in order to address an oft-reported-but-apparently-nonexistent lack of qualified workers to fill these jobs. I’m not going to wade into the immigration debate because I think the authors are missing some more important questions:

What’s happening to all the STEM graduates, and why? Where are they going?

What are the consequences of so many highly educated, well-qualified people leaving their respective fields of study?

What can STEM graduates, their employers, and universities do differently to avoid some of the negative effects of this trend?

To learn more, read on.

The Engineer’s Dilemma: Employers and Planned Obsolescence


For the majority of STEM graduates who end up working outside their chosen fields of study, the seeds of the Engineer’s Dilemma are sown in their university experience. But they are nurtured and come to full fruition as graduates begin their post-university employment.


Employers exist to create profit, and with good reason. Profitable companies have time and money to spend on a whole host of good things. Unprofitable or marginally-profitable businesses are miserable places to work. We can’t fault the pursuit of profit alone for the course that some managers choose to take along the way.

There are two (legitimate) ways to increase profit: Growing revenue against a fixed cost basis, or reducing the cost basis against a fixed revenue stream. The former seems to be an order of magnitude more difficult than the latter. And so risk-averse managers “focus on the bottom line” by “taking cost out of the business”. Sometimes this means finding more efficient ways to do things, like changing the way a factory runs to produce less waste, or improving the design of a product to require fewer parts.

Other times, and particularly in businesses dealing with intellectual property where the majority of the cost basis is people rather than things, the only way to take cost out of the business is to get the job done with fewer people or with people who are willing to work for less money. And it turns out that STEM graduates are disproportionately likely to work in these types of businesses.

It should not surprise us, then, that according to the study we cited earlier, the average compensation for all STEM jobs has been flat for the past 15 years on an inflation-adjusted basis. This doesn’t mean that STEM graduates haven’t received a raise in the last decade and a half. It means that companies have become very adept at taking cost out of their businesses.

There are two effects buried within this trend that each merit our attention:

1. STEM workers are still getting merit and promotion raises. Recent graduates start their careers well below average compensation and receive annual pay increases commensurate with performance. At some point their compensation reaches the average and then crosses over to greater than average. There are about as many STEM workers who make more than the average salary as there are who make less than average.

2. The use of outsourced labor at lower rates has increased in the last 15 years. This phenomenon effectively puts a lid on STEM salaries regardless of performance. Once a worker’s salary hits the ceiling, it makes financial sense to outsource or eliminate that person’s job even if doing so requires back-filling with more than one worker.

In other words, there is planned obsolescence built into the career path for nearly all STEM graduates. The only question is how soon it happens. High performers who realize annual increases in pay in the range of 7-10% above the rate of inflation will hit the salary ceiling in as few as 10 years. The middle third of workers who realize annual increases of 4-5% greater than inflation will top out in about 15 years. And the bottom third of workers whose pay increases average 3% or less above the rate of inflation will hit the ceiling in 20-25 years.
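To see where those timelines come from, here is a minimal sketch of the compounding math, assuming (purely for illustration) that the salary ceiling sits at roughly twice a graduate’s starting pay:

```python
import math

def years_to_ceiling(real_raise_rate, ceiling_ratio=2.0):
    """Years of compound real raises until pay reaches the ceiling.

    ceiling_ratio (ceiling pay / starting pay) is a hypothetical
    assumption chosen for illustration, not a figure from the study.
    """
    return math.log(ceiling_ratio) / math.log(1 + real_raise_rate)

for label, rate in [("High performer", 0.085),
                    ("Middle third", 0.045),
                    ("Bottom third", 0.03)]:
    print(f"{label}: ~{years_to_ceiling(rate):.0f} years")

# High performer: ~8 years
# Middle third: ~16 years
# Bottom third: ~23 years
```

The exact years shift with the assumed ceiling ratio, but the ordering doesn’t: the better you perform, the sooner you top out.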

The implications of this phenomenon for employers are profound. The first to be affected are the best and brightest. They are used to making big contributions and receiving rewards commensurate with the impact of their work. As soon as they get the sense that they might hit the salary ceiling they will move on to greener pastures, either to another function with more lucrative rewards or another company that has figured out how to keep its superstars from topping out on their compensation. These top performers know that there is no point in going back to school for a STEM graduate degree, and they are often viewed as too young for leadership positions or don’t aspire to management. The optimal thing for them to do once they near the top of the pay scale is to move on.

The middle third of workers hit the pay ceiling at the right time in their careers to join the ranks of management. They are generally the sharpest people on staff at all but the most competitive companies, so they appear to be distinguished in both their talent and their experience. The problem with this group occupying management positions is that they tend to make life unbearable for the younger, more talented workers below them. A middling manager will try to guide a high-potential worker down the same career path that worked for the manager, and to the high-potential this is tantamount to career suicide. High-potentials recognize and can’t endure the incompetence they see in these managers. They run for the exits, which may be a win for both the manager and the high-potential in the short run but is tremendously costly for the company in the long run.

The lower third of STEM workers hit the pay ceiling too late in their careers to make a substantive career change, and they may not have seen planned obsolescence coming at all. These workers usually end up unemployed with few options.

Employers can make simple and substantive changes to the way they reward their employees, particularly their best and brightest, that will end the vicious cycle of talent flight and increase the competency of managers. Better talent and better management will attract even more high-potential talent and create a sustainable competitive advantage for the company.

But not all companies will understand how to create a system of rewards that allows talented STEM graduates to thrive. In such cases graduates can take charge of their own destiny and thrive in the face of planned obsolescence.

Read on to learn more.

The Engineer’s Dilemma: The Role of Universities


The majority of US college graduates who hold STEM degrees no longer work in their fields of study. These millions of career changes create significant costs for STEM graduates and their employers, and they hurt society as experienced workers move on to new careers and are replaced by less-experienced ones. To understand this problem we need to start at what has been the beginning of STEM education: The university experience.


Universities generally do a poor job of preparing STEM graduates for work in their chosen fields. The modern university system is characterized by strong incentives for faculty to get tenure, which they achieve through publishing research rather than teaching. Undergraduate courses are among the least desirable teaching assignments, and since tenured faculty have the first pick of which courses they want to teach, assistant professors are disproportionately represented among faculty assigned to teach undergraduates.

And so the undergraduate experience is characterized by inexperienced instructors who have no incentive to become good at teaching. Even if these junior faculty wanted to focus on teaching and preparing STEM undergraduates for the workforce, how could they do so when they have little to no experience working in industry?

Beyond the problem of inexperienced faculty focused on getting tenure, things don’t get much better. Prestige among tenured faculty is determined by grant funding and publication, activities that compete directly with industry for the best and brightest students. Many graduate students ostensibly begin their death march to a Ph.D. with visions of joining the ranks of their tenured mentors, but fewer than 10% of STEM Ph.D. graduates end up landing university teaching positions and only a fraction of those get tenure. The rest end up back in industry with very little to show for their five-plus years of hard labor: There is only a small wage premium on a graduate degree in engineering or computer science vs. a bachelor’s degree plus an equivalent number of years of work experience, and that wage premium might vanish entirely if we were to limit the cohort of students with bachelor’s degrees to those who actually chose to forgo graduate school.

The university system has apparently evolved to do nothing more than elevate and replicate itself. It takes in large numbers of undergraduate students to accomplish this mission, and most of these will necessarily end up in jobs outside academia. The welfare and long-term success of students who are not destined for academic tenure is merely an afterthought in a system that is optimized around the fraction of a percent of undergraduates who will eventually become tenured professors.

The tenure system and minimal attention given to all but a handful of undergraduates ends up hurting universities in the long run. Fewer graduates are realizing an economic return on their investment in education as tuition costs have risen dramatically over the past 20 years. Graduates who are saddled with debt and disillusioned about their career prospects are not going to be active in alumni networks or fundraising, especially in the case of the many poorly-prepared STEM graduates who are destined to hit a salary ceiling and change paths mid-career.

A university that fails to prepare STEM students for what lies ahead in their careers may be doing real damage to its brand and its future bottom line. But STEM graduates and their employers experience similar consequences much sooner.

Read on to learn more.

Try homeschooling this week, risk-free.



My wife and I homeschool our three children. We started when they were very young, so we never really faced the dilemma experienced by many parents in the public-school system who feel like they should homeschool but wonder whether they can make it work for their family.

But we have worked with many families over the years who have struggled with this dilemma. Our experience is that parents who try homeschooling find that it not only works for their family, it works much better than they had imagined it would.

If so many parents are aware that homeschooling their children is an option, if they see it working well for other families like theirs, and if they even feel that they should give it a shot, why do so many parents hesitate to try homeschooling?

I think it’s all about risk.

I’ve spent much of my professional life helping companies figure out why more customers aren’t buying their products. Many times the product isn’t the problem. I’ve learned that even a superior product with a “no-brainer” value proposition — one that literally prints money for a customer — will struggle if the company’s business model forces the customer to take on too much risk.

Homeschooling is no different. Research has consistently shown that homeschooled children score significantly higher than those who attend public schools on a battery of standardized tests regardless of factors such as gender, household income, education level of the parents, and homeschool regulation at the state level. But the risks associated with a decision to homeschool can outweigh even these well-documented rewards and lead parents to perpetuate the public-school status quo.

Parents perceive all sorts of risks in the decision to homeschool their kids:

  • What will my friends, neighbors, extended family, or even my spouse think of this decision?
  • How will I manage teaching so many subjects to kids of different ages?
  • Where will I find a good curriculum?
  • What about math? I don’t think I’m good at math. How will I teach them math?
  • What if my kids turn out to be socially awkward like that one other family?
  • What if I can’t stand to be around my kids for that many hours every day?
  • What if I fail? What if I ruin my kids because I’m not good enough as a parent or teacher?

Parents who choose to homeschool will have to manage all of these risks in the long run. The key is to find a way to get started without having to manage all of these risks immediately, before any of the rewards are evident.

And thanks to a particularly nasty winter storm, this week affords parents across much of the country a unique opportunity to try homeschooling risk-free. Think about it:

Schools are closed.

Parents are home from work.

Power may be out. If not, pretend like it is for a couple of hours. At least turn off the TV and the video games.

Sit down with your kids and explain that you’d like to do a bit of work together before turning them loose to play outside in the snow. Have them review their most recent math or writing assignment with you. Or watch a Khan Academy video together. Or memorize a poem, write a letter, or read a short story. Go to the library. Or read a relevant Wikipedia article on a topic they are studying in science or history.

What you do together doesn’t really matter at this point. Do something together that leaves you both feeling good about the experience and a little bit smarter than when you started.

You don’t even have to tell your kids, or your spouse, or your friends, or anyone else what you’re doing. At least not yet. A few of these baby steps will work wonders for your self-confidence and your ability to answer the questions that will inevitably follow.

Take advantage of this risk-free opportunity to convince yourself that you are more than capable of teaching your own kids. You have nothing to lose!

It’s time to cry for Argentina (again)


My experience with Argentina goes back nearly two decades to when I spent two years as a Mormon missionary in Patagonia. I learned that the Argentine people are wonderful, the food is exquisite, the scenery is breathtaking, and the economy is never more than 5 years away from being a complete disaster.

While Argentina is now plumbing the depths of the Index of Economic Freedom, few know that it was one of the world’s 10 largest economies at the turn of the 20th century. In fact, for much of its early history Argentina’s economy was at parity with that of the United States.

But these two countries pursued very different economic policies that were rooted in their respective colonial histories: The United States made property ownership easy as it expanded westward, while Argentina created a new class of aristocrats who owned the vast majority of its frontier land. Both countries benefited from exporting agricultural commodities to Europe in the 19th century, but Argentina’s risk-averse landowner-oligarchs resisted industrialization. Global food prices collapsed in the mid-1920s as production in Europe bounced back after the end of World War I, and this shock hit Argentina’s undiversified agricultural economy especially hard. The global Great Depression a few years later further destabilized the economy, which in turn led to a series of political shocks: A dictatorship, a coup, a military junta, and a disastrous war with Great Britain.

In summary, Argentina represents how the United States might look in an alternate universe in which los yanquis had made similar political and economic decisions.

Argentina’s currency was quite stable during my time living there. In those days the value of the peso was pegged to that of the dollar. This trick brought Argentina’s chronic hyperinflation under control, from an annual rate of 3,000% in 1989 down to 3.4% in 1994, and restored public confidence in the local currency.

But problems soon emerged. The strong dollar-pegged peso reduced the competitiveness of Argentine exports, so provincial and city governments ran fiscal deficits in order to “stimulate” their economies and fight stubbornly-high unemployment. In those days I would have a number of conversations along these lines with people I met in various cities:

Yo (me): “So what kind of work do you do?”

Fulano (so-and-so): “I work for la municipalidad (the city).”

Yo: (looking at my watch, Tuesday at 11:00 AM) “Hmm…”

Fulano: (smiling) “They only call me when they need me, che!”

The cash-strapped cities and provinces needed frequent injections of cash from the federal government, which in turn was unable to finance this spending by simply printing more pesos as it had done in the past. So all of these state and local deficits turned into federal debt, denominated in dollars and sold on the international credit markets.

Things started to get really bad for Argentina in the late 1990s. The economy was struggling under an increasing debt load, and the strong US dollar forced Argentina to maintain a strong peso while the currencies of major trading partners and competitors in Brazil, Europe, and Asia were relatively weak due to regional crises. There was no way for Argentina to remain competitive in global trade without devaluing the peso, which would require breaking its peg to the dollar, and this in turn would precipitate a default on the country’s dollar-denominated foreign debts.

The Argentine people are quite familiar with monetary crises. In 2001, after being mired in a nearly-three-year-old recession and fearing currency devaluation, they began withdrawing their pesos from banks, converting them to dollars at the official 1:1 exchange rate, and stashing the dollars in their mattresses (or outside the country if the mattress was too small). The government responded with a set of capital controls that became known colloquially as the corralito (literally “playpen”), prohibiting cash withdrawals from dollar-denominated bank accounts unless the deposits were converted to pesos, and limiting withdrawals from peso-denominated accounts to 300 pesos per week.

The public outcry in response to the corralito was fierce, culminating in the resignations of Economy Minister Domingo Cavallo and President Fernando de la Rúa. Their successors expanded the corralito in short order by forcibly converting all dollar-denominated deposits (and debts) to pesos and devaluing the peso from the exchange rate of 1 peso per dollar to 1.4 pesos per dollar. Soon afterward they let the peso float, and several weeks later it stabilized at a rate of about 4 pesos per dollar.
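It’s worth pausing on the arithmetic of that forced conversion. A quick sketch using the rates quoted above (and ignoring the inflation that followed):

```python
deposit_usd = 10_000        # a dollar-denominated deposit before the corralito
pesos = deposit_usd * 1.4   # forcibly converted at 1.4 pesos per dollar
value_usd = pesos / 4.0     # dollar value once the float settled near 4:1

loss = 1 - value_usd / deposit_usd
print(f"${deposit_usd:,} became ${value_usd:,.0f}, a {loss:.0%} loss")
# $10,000 became $3,500, a 65% loss
```

Savers who had trusted the banks with dollars were handed pesos worth roughly a third as much.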

I visited Argentina again a few years after the 2001 devaluation. This time I was in graduate school and we were studying local companies as part of a course in global strategy. The economy had recovered substantially in spite of the government being shut out of global credit markets due to its 2001 default. Most of the companies we visited were bullish on the country’s future.

The highlight of the trip was a dinner that I had with a good friend and former missionary companion. We shared good memories from our time working together and talked about our young families and career plans. He said that one of the lasting legacies of the devaluation was the “brain drain” that occurred as many young, talented professionals left Argentina for other countries. He estimated that a third of the people his age in his neighborhood had emigrated since the crisis.

The following afternoon my classmates and I had a meeting with none other than Domingo Cavallo, the man who had implemented dollar convertibility in 1991 to break Argentina’s chronic inflation and, ironically, had been responsible for the corralito a decade later. I expected that Cavallo would give us a history lesson, perhaps to explain what he did in each case and why. The lecture turned out to be much more useful than I had imagined.

Cavallo spoke for two hours on the perils of inflation, its early warning signs, and how we should manage our personal finances and businesses in an inflationary economy. He began by explaining that inflation is highly asymmetric: It’s easy to create (just fire up the printing presses), but it’s very hard to eradicate. Once people begin to expect inflation, the problem becomes particularly acute: People demand higher wages, which in turn drives up prices, which signals to more people that they should expect future inflation and the cycle repeats ad infinitum.

However, Cavallo said that while this cycle can produce painful inflation on the order of 10% to 20% per year, it cannot produce on its own the type of “hyperinflation” on the order of 100% to 1,000% per year that had plagued Argentina in the past. Hyperinflation is possible only when consumers lose confidence in their local currency and begin to save and trade in foreign currency. Prices are actually quite stable in foreign-currency terms during episodes of hyperinflation. The massive price increases in local-currency terms occur because of devaluation: No one trusts the local currency enough to risk holding it at practically any price.

So, Cavallo warned us, when you see people start dealing in foreign currency, it’s time to buckle up.

In recent weeks Argentina has returned to the international stage, this time as its official exchange rate has fallen to 8 pesos per dollar (and the unofficial black-market rate to 12 pesos per dollar). The reason? The country is running out of the foreign exchange reserves that it needs to sell on the open market in order to buy enough of its own currency to prevent its price from collapsing. Reserves are low because few foreigners want to invest in a country that has repeatedly expropriated and nationalized private businesses, and Argentina’s steep tariffs, intended to keep domestic prices and unemployment low, further discourage foreign trade. When a central bank is creating new money faster than its economy is creating real value, there will always be downward pressure on its currency.
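That last sentence is the whole mechanism, and a toy calculation makes it concrete. All the numbers below are hypothetical, chosen only to show the direction of the pressure:

```python
# If the money supply grows much faster than real output, each peso
# chases proportionally less real value, and the implied exchange
# rate drifts away from whatever the official rate says.
money, output = 100.0, 100.0
rate = 8.0                      # pesos per dollar today
for year in range(1, 4):
    money *= 1.30               # hypothetical 30%/yr money growth
    output *= 1.02              # hypothetical 2%/yr real growth
    implied = rate * (money / 100) / (output / 100)
    print(f"Year {year}: ~{implied:.1f} pesos/dollar absent intervention")
# Year 1: ~10.2 pesos/dollar absent intervention
# Year 2: ~13.0 pesos/dollar absent intervention
# Year 3: ~16.6 pesos/dollar absent intervention
```

A central bank can lean against that drift by selling reserves, but only for as long as the reserves last.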

Argentina and the United States are similar countries in many ways, especially considering their parallel economic development leading up to the early 20th century. Their development diverged due to differences in their respective political systems and economic policies. Argentina remains a constant reminder of what the United States would look like under a similar political and economic regime characterized by political cronyism, state-sanctioned monopolies, powerful unions, lawless seizure of private assets, fierce protectionism, endless fiscal stimuli, loose monetary policy, unrestrained borrowing, poor productivity, and high unemployment.

Perhaps the most significant difference between the two countries today is that the United States is allowed to pay its debts in its own currency.

At least for now.


Why default is inevitable


We seem to be living in perilous times. Parts of the United States government are shut down because the House and Senate can’t agree to either fund or defund Obamacare, whose rollout this month has been an unmitigated disaster. This impasse now threatens the looming deadline to raise the debt limit before the Treasury supposedly runs out of money sometime in the next 48 hours.

Many people around the world fear that the United States might default on its debt if Congress fails to raise the debt limit. The consequences of default would be far-reaching and hard to predict because so much of the world economy is based on the US dollar and the “full faith and credit” of the US government to honor its debts.

In recent days several friends have asked what I think about this situation. My position is that default is inevitable, but I’d be very surprised if it happened this week. Here’s why:

Most of the Federal budget, nearly 60% of it, is taken up by entitlement programs. People who qualify for these programs, like Social Security and Medicare, are legally entitled to their benefits. The government cannot reduce these benefits without changing the law. About 30% of Federal spending is discretionary, meaning that the government can choose to cut it without violating any laws. And interest on the national debt accounts for less than 7% of the Federal budget.

The problem is that current Federal spending is about 24% of GDP (gross domestic product, i.e. the total value of all goods and services produced by our economy each year), whereas Federal revenue (taxes) is only about 15% of GDP. So for every dollar the Government collects in taxes, it’s borrowing and spending an additional 60 cents.

That borrowing represents 38% of the Federal government’s funding, which is greater than the 30% of the budget taken up by discretionary spending. So even if we slash discretionary spending to zero, we can’t fund both our current entitlement spending and our interest payments without additional borrowing.
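Those round numbers are easy to check for yourself. A quick sketch of the arithmetic:

```python
spending = 0.24   # Federal spending as a share of GDP
revenue = 0.15    # Federal tax revenue as a share of GDP
borrowing = spending - revenue   # 0.09, i.e. 9% of GDP

print(f"Borrowed per tax dollar: {borrowing / revenue:.0%}")           # 60%
print(f"Borrowing as a share of funding: {borrowing / spending:.1%}")  # 37.5%
print(f"Taxes cover {revenue / spending:.1%} of the budget")           # 62.5%

# Entitlements (~60%) plus interest (~7%) alone consume ~67% of the
# budget, more than the ~62.5% that taxes cover. So even zeroing out
# all discretionary spending leaves a gap that must be borrowed.
```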

This means we’re in pretty bad shape today, but it’s only the beginning of the story.

In the days before Obamacare became law, the non-partisan National Taxpayers Union issued a report on the unfunded liabilities of then-current entitlement programs. In other words, understanding that we’re going to have greater numbers of older, sicker people in the future and fewer workers funding Medicare and Social Security, how much extra money would the Federal government need to have on hand today in order to deliver all the benefits it has promised under these programs for the next 75 years without raising taxes?

The answer: About $46 trillion. That’s about three times the current United States GDP. And that’s before factoring in the Obamacare subsidies that will theoretically help defray the cost of health insurance for lower-income Americans once the websites stop crashing.

The National Center for Policy Analysis published a similar study 10 years ago, before the Medicare prescription drug benefit (Part D) became law, and the numbers were similar. Authors Jagadeesh Gokhale and Kent Smetters predicted that the fiscal funding gap would be around $50 trillion including Medicare Part D. They calculated that the gap could be closed by a permanent increase in payroll taxes of about 15%, or by permanently raising income tax revenues by about 60%.

This entitlement funding gap is frighteningly large, much larger than the value of the national debt that causes so much political hand-wringing. Each year we’re spending more than 8 times as much on entitlement programs as we are on those interest payments that we’re supposedly going to miss if we don’t raise the debt limit. But we don’t have a meaningful national discussion about entitlement reform.

For all this talk about defaulting on our debt obligations, why aren’t we talking instead about the inevitable default on our unsustainable entitlement spending?

Part of the problem is that this conversation necessarily involves math, and a recent study suggests that less than 10% of US adults are likely proficient enough to understand a question of this complexity.

Beyond ignorance, the obvious answer as to why we don’t talk about entitlement reform is that these programs are untouchable. Any attempt to alter them in a meaningful way is political suicide. This feature was admittedly designed into the programs in the form of payroll taxes. As FDR said:

We put those pay roll contributions there so as to give the contributors a legal, moral, and political right to collect their pensions and their unemployment benefits. With those taxes in there, no damn politician can ever scrap my social security program. Those taxes aren’t a matter of economics, they’re straight politics.

Well done, then. 78 years later and still working as designed.

Congress and LBJ amended FDR’s Social Security Act in 1965, creating Medicare and Medicaid.

Medicare, like FDR’s Social Security, is funded through payroll taxes.

Incidentally, some policymakers wanted health insurance as part of the original Social Security Act of 1935, but they feared that including it would jeopardize the entire bill, so it was left out.

In piggybacking Medicare on top of Social Security and using payroll taxes to fund this new entitlement, LBJ guaranteed that Medicare would be just as politically sacrosanct as its forerunner. And 48 years later, Medicare is still as untouchable as ever. It has been amended several times but has always grown in size and scope.

In spite of their shared political knack for creating these fiscal black holes, FDR and LBJ are not the worst villains in this story. That distinction belongs to LBJ’s successor, a man whose infamy seems to have denied him the privilege of a three-letter sobriquet.

Richard Milhous Nixon did not create Social Security or Medicare. But the magnificently reckless monetary policy maneuvers he undertook to boost his chances of re-election effectively forced central banks around the world to hold the US dollar as their primary reserve asset. That made them complicit in anesthetizing the United States to the fiscal pressures of these two entitlement monsters plus profligate discretionary spending, lulling us into a false belief that our government can be exceedingly generous to all without anyone ever having to pick up the check.

When Nixon unilaterally defaulted on our obligations under the Bretton Woods system back in August of 1971, he set us up to run chronic fiscal deficits with no real consequences. And but for a period of fiscal sanity in the late 1990s, we have done just that.

So we find ourselves in the present crisis, stuck between a fiscal rock and a monetary hard place, with massive unfunded entitlement liabilities on one hand and an endlessly-growing national debt on the other. There is no way out but through some kind of default:

  • We could default on our bondholders. One of my professors once said that when sovereign debt reaches unsustainable levels, default is inevitable. And a nation can either default quickly through a restructuring, which is a euphemism for “we aren’t going to pay you back” (see Argentina), or do it slowly through deliberate inflation, i.e. paying off an unsustainable debt with cheaper dollars in the future (arguably the Fed’s present and future course of action). A restructuring of US debt is unfathomable at present as we can (at least for now) easily afford our interest payments. And in spite of what seems like partisan gridlock, the Treasury still has a few tricks up its sleeve before it throws in the towel.
  • We could default on the old, the poor, and the sick. Restructuring Social Security and Medicare seems like a good idea given more than $46 trillion in unfunded liabilities tied to these programs, in addition to the still-unknown cost of the Obamacare subsidies. But by design any of these reforms will entail a steep political cost, and the proverbial chickens likely won’t come home to roost until today’s politicians have retired or died. It may take a threat of (or even an actual) default on our debt in order to compel the government to undertake meaningful entitlement reform. Some low-risk options for restructuring these programs include significant phased increases in the age of eligibility and financial incentives for families to complete end-of-life planning (advance directive, living will, medical power of attorney, etc.).
  • We could default on taxpayers. Granted, there is no prohibition against raising taxes in the future, but a social contract most definitely exists between taxpayers and the government wherein the government promises fair services in exchange for a fair tax rate. We will likely need some type of tax increase in order to close the entitlement funding gap, and any tax increase will be politically risky. We could theoretically minimize political fallout by increasing taxes on only the so-called “rich”, though this type of default cuts both ways in the tax game and is therefore of limited utility in practice.

We’re in a tough spot, and there is no easy way out. We’re ultimately going to have to restructure both our tax code and our entitlement programs if we want to regain the kind of fiscal and monetary discipline that will allow future security and prosperity. These entitlement and tax defaults will be painful, but they will be much more manageable than a disorderly default on our debt.

The most immediate question, then: Who is in a position to lead us through this transition?

Confessions of an unlikely homeschooler



My wife and I teach our three children at home. This is a rather generous assertion on my part, as she does most of the teaching. My primary job is to keep the whole endeavor funded. Beyond that I also teach the sciences, higher math, wood shop, physical education, and occasionally music.

Society calls what we do “homeschooling”, a semantically-ambiguous term that generally refers to families who opt out of both public and private schools and take full personal responsibility for the education of their children. But our reasons for doing so are different from those of the millions of other families who follow a similar path. In fact, there are probably as many reasons to homeschool as there are homeschooling families.

In conversations with friends I regularly find myself answering the question, “What made you decide to homeschool?”

My wife likes to say that we started teaching our children as soon as they were born. We helped our kids learn to roll over, sit up, crawl, stand, walk, and run. We taught them to speak, read, and write. We were among a small minority who did not send our kids away to preschool. Instead, we picked up a homeschool kindergarten math curriculum from Saxon.

We soon found ourselves facing a dilemma: Our oldest daughter was a year ahead in math and several years ahead in her reading and writing by the time she was supposed to be starting kindergarten, not because she is a savant or child prodigy, but because we taught her those things at an early age when she was ready for them. Should we put her with peers her same age so she could “fit in” socially but be bored academically, or should we put her with peers of similar ability and have her be a social misfit among older kids?

Neither of these was an acceptable option, so we rejected them both. We opted to in-source the teaching and relied on church, music, sports, and other networks for social connections to her peer group.

This period of our lives coincided with my time in graduate school, and I enjoyed applying what I was learning there to the questions we were facing at home regarding education. I understood that our daughter was a statistical outlier among her peer group at the time in terms of her math and reading skills, but those aren’t the only two types of intelligence, and in all likelihood she was below the mean in some other areas. Because it must serve students by the millions, the education system in general (and the classroom setting in particular) caters strictly to the mean. So how could an undifferentiated classroom experience possibly be optimal for anyone who is an outlier, positive or negative, in any dimension?

The answer, of course, is that such an experience is not optimal for any student. And that fact is obvious when you consider the way the system is constructed. Any rational system is designed to economize around its most scarce resource. For example, certain parts of our healthcare system are characterized by big, expensive buildings filled with expensive people and expensive equipment. Why do we build these expensive hospitals and make the patients show up and wait around? Because the doctors and the equipment they use are the most scarce resources in the system. We optimize the system to make sure that those resources are fully utilized, and we waste the more abundant and less-valuable resources, like the patients’ time.

Our education system is similar. We build big expensive buildings and fill them with expensive (unionized) teachers. We make the students come to the buildings and do a lot of sitting around, standing in line, waiting their turn, etc. Why? Because we view the teachers as the most scarce resources. We optimize the entire system around the most efficient use of their time. Individual student outcomes are of vanishingly small importance.

It’s the collective outcomes that matter most to the education system. Public schools in the United States are modeled after the system established in 18th-century Prussia, whose purpose was to instill in its citizens the doctrine of social obedience to the King and to produce a steady supply of qualified labor for the bureaucracy, the military, and emerging industry.

In other words, the education system was not designed to produce excellence (positive outliers), rather a predictable mean and narrow standard deviation. Why? Because people who think the same are easier to govern, easier to lead into war, and easier to manage at work. To these age-old insights I will add a modern one: It’s also much easier to market to a population that has been systematized into thinking the same way.

I have a habit of waxing verbose on these points whenever I am asked why we homeschool. After one such conversation a few weeks ago, a good friend sent me an illustration of a TED talk given by Sir Kenneth Robinson. It’s worth watching because it conveys these same ideas graphically and much more succinctly than I do.


So I have come to understand that our fundamental reason for homeschooling our children is that we see unique greatness in each of them that we don’t want the system to destroy in its endless quest to reduce variance. We want our children to think in ways that are not strictly correlated with how everyone else thinks.

And in the present environment, the stakes are so high that we cannot afford to outsource this job to anyone else.

That’s why we homeschool.

The 1%: They’re at it again…



One percent of Americans now earn a greater share of income than at any time since the 1920s, according to this article posted today. The top 1% of income earners, those who earned more than $394,000 last year, accounted for more than 19% of all income reported to the IRS, while the top 10%, or those who earned more than $114,000, accounted for more than 48% of all income.

I got upset when I read this article. I don’t think it’s fair. And I think something needs to change.

Specifically, I don’t think it’s fair to any of us that the standards of journalism have sunk so low that an AP reporter can get away with publishing a handful of numbers buried in a steaming pile of opinion and pass it off as “news”.

Nassim Taleb rants in his excellent book The Black Swan about how he does not read the news. He relies instead on prices to communicate what’s going on in the world, because prices are purely objective. He ridicules outlets like Bloomberg that try to explain price movements with a narrative that connects them to other events. No doubt you’ve seen the following occur: The market opens lower, and Bloomberg runs a headline “Stocks down on interest rate fears.” And then by lunchtime the market has rallied for some random reason, and Bloomberg changes the headline to “Stocks up on interest rate optimism.”

Of course investors don’t change their minds on interest rates between breakfast and lunch, unless Bernanke happened to make a statement to the press in the meantime. And Bloomberg certainly can’t tap into the thoughts of millions of market participants that quickly with any accuracy. These headlines are absolute rubbish, but we eat them up and come back for more. Taleb attributes this irrational behavior to what he calls the “narrative fallacy”, or our inability to look at facts without trying to come up with a story that ties them together. Everyone seems to want to find a narrative, and every narrative, no matter how far-fetched or even ridiculous it might be, will seem plausible to at least one person.

Let’s apply this lens to the AP article on income inequality. But before we do, I need to rant a bit myself:

I am frustrated with the lack of semantic precision found in most news articles about the economy, particularly the confounding of two concepts that are related but very different: Wealth and income. Too many reporters use these words interchangeably. This lack of discipline (together with lousy education, more on that later) is responsible for spreading a plague of economic and political confusion among the general public.

Wealth is a “stock” variable that describes what you have accumulated. It is expressed in units of money, like dollars. It is your net worth, i.e. the sum of your assets less the sum of your liabilities. It’s the value of your bank accounts, your property, your investments, etc. minus the value of all your debts. It’s the equity on your personal balance sheet. Ideally you want this number to be greater than zero.

Income is a “flow” variable that describes the rate at which you acquire wealth. It’s expressed in units of money per unit of time, like $20/hour or $114,000 per year. Generally speaking, you also want this number to be greater than zero.
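In code terms, the difference is a snapshot versus a rate of change. A minimal sketch with hypothetical numbers:

```python
# Wealth is a stock: a dollar amount at a point in time.
assets = 250_000
liabilities = 180_000
wealth = assets - liabilities          # net worth today: $70,000

# Income is a flow: dollars per unit of time (here, per year).
income = 114_000
spending = 110_000

# The flow changes the stock over time; it is not the stock itself.
for year in range(3):
    wealth += income - spending
print(f"${wealth:,}")                  # $82,000 after three years
```

High income builds wealth only to the extent that it isn’t spent, which is exactly why the two words shouldn’t be used interchangeably.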

But describing personal finance using these two variables is problematic for the popular discourse, because we find it convenient to distill reality, in all of its rich detail, down to a series of false dichotomies in order to simplify the talking points. For example, the terms rich and poor.

What does it mean to be rich? Does it mean high wealth, high income, or both? The answer to this question may seem irrelevant to many Americans considering our labor force participation rate of just 63.3%. There are an awful lot of people who would be happy if they were just a little better off than the status quo. But rich people have high wealth. There isn’t such an easy term for high-income people, so lazy reporters and politicians often call them rich as well. And while high income is correlated with high wealth, it doesn’t necessarily cause high wealth because one can always spend more than he earns.

Even with unambiguous definitions of wealth and income and our best efforts to avoid the narrative fallacy, income statistics can be hard to understand for a number of reasons. So let’s go back and walk through that AP article.

We get off to an inauspicious start with the first sentence:

The gulf between the richest 1 percent and the rest of America is the widest it’s been since the Roaring ’20s.

The erroneous use of the word “richest” to describe the concept of income rather than wealth allows the author to foist another subtle lie upon us: That the top income-earners are an exclusive fraternity whose membership has not changed since the Roaring ’20s. You know, a cabal of old rich guys like Charles Montgomery Burns who’ve been waiting for this “Excellent!” opportunity to return to their pre-Depression glory days.

In fact, the composition of the 1% changes dramatically from year to year. In addition to the perennials like Bill Gates, Warren Buffett, and Mr. Burns, the 1% club includes transitory members like the salesperson who had a great year after spending the previous two living on her modest base salary while she built customer relationships in a new territory, and the farmer who had a bumper crop after getting wiped out by drought the previous year, and the entrepreneur who sold a business he had been running on a shoestring budget for several years without taking a salary himself.

These people and many others like them have a lot of lousy-to-mediocre years and a handful of great ones because they take big risks, and sometimes those bets pay off well.

The author continues:

In 2012, the incomes of the top 1 percent rose nearly 20 percent compared with a 1 percent increase for the remaining 99 percent.

The richest Americans were hit hard by the financial crisis. Their incomes fell more than 36 percent in the Great Recession of 2007-09 as stock prices plummeted. Incomes for the bottom 99 percent fell just 11.6 percent, according to the analysis.

But since the recession officially ended in June 2009, the top 1 percent have enjoyed the benefits of rising corporate profits and stock prices: 95 percent of the income gains reported since 2009 have gone to the top 1 percent.

That compares with a 45 percent share for the top 1 percent in the economic expansion of the 1990s and a 65 percent share from the expansion that followed the 2001 recession.

Again, the use of subtle language like “the incomes of the top 1 percent” implies that each member of last year’s top 1 percent got a 20% raise this year. Use of the singular, “the income of the top 1 percent,” would be factually and grammatically correct. We have no way of knowing the average increase for any individuals year-over-year.

The author repeats the same pattern of flawed reasoning and subtle linguistic hints throughout this section, attempting to construct a fictitious narrative of a typical 1%-er during and after the financial crisis using aggregate data, as if, say, the bearish money managers who did well during the crisis were the same people who profited from the market’s recovery in mid-to-late 2009. And to finish this exercise in fallacious reasoning, the author compares these imaginary individuals with their younger imaginary selves from 2001 and even the 1990s, apparently to show that if there were a person fortunate enough to have been a member of the 1% club for 23 straight years, he would have done exceptionally well this year compared with his measly-but-still-1% performances in past bull markets.

The author goes on to provide a couple of useful numbers that I already included in my introduction, and then comes the following important disclaimer:

The income figures include wages, pension payments, dividends and capital gains from the sale of stocks and other assets. They do not include so-called transfer payments from government programs such as unemployment benefits and Social Security.

In other words, they’ve assumed that the 37% of Americans who have given up looking for work have no income at all, even though we are supporting them with myriad entitlement programs. And the measured population also includes the 13% of Americans who are over the age of 65 and living on Social Security, but we don’t count those payments as income, either. So fully 50% of the population, according to this study, effectively has zero income!

Might that little fact explain why this analysis found such a small increase in the income of this population since 2009? They’re not working!

And now, I present to you the narrative fallacy in all its glory:

The gap between rich and poor narrowed after World War II as unions negotiated better pay and benefits and as the government enacted a minimum wage and other policies to help the poor and middle class.

The top 1 percent’s share of income bottomed out at 7.7 percent in 1973 and has risen steadily since the early 1980s, according to the analysis.

Economists point to several reasons for widening income inequality. In some industries, U.S. workers now compete with low-wage labor in China and other developing countries. Clerical and call-center jobs have been outsourced to countries such as India and the Philippines.

Increasingly, technology is replacing workers in performing routine tasks. And union power has dwindled. The percentage of American workers represented by unions has dropped from 23.3 percent in 1983 to 12.5 percent last year, according to the Labor Department.

The changes have reduced costs for many employers. That is one reason corporate profits hit a record this year as a share of U.S. economic output, even though economic growth is sluggish and unemployment remains at a high 7.2 percent.

Every one of these statements is an opinion with enough facts sprinkled in to make the author sound credible. But there are numerous other plausible narratives that could explain how we got to where we are. Here’s my version:

The distribution of incomes narrowed after World War II as millions of employable men returned home to cities rather than their family farms, swelling the ranks of labor unions as they took private-sector manufacturing jobs that paid better than military service or manual labor on the farm because they created more economic value than these alternative uses for the same labor.

The trend toward urbanization and industrialization continued until the early 1970s, when increased manufacturing automation, information technology, liberalization of trade, and floating currencies forced manufacturing workers to begin competing against machines and cheap offshore labor.

Meanwhile, easy monetary policy began to inflate the prices of financial assets held disproportionately by wealthier Americans and eroded the purchasing power of the working classes through a politically-rigged Consumer Price Index. This decline in household purchasing power, together with the rise of feminism, encouraged more women to join the workforce.

This increased the supply of labor but not the demand for the same, which further depressed wage growth, which made the two-income household a modern necessity and deprived children of many of the emotional benefits enjoyed by the previous generation, which in turn coincided with the dumbing-down of public education and ultimately condemned a huge segment of the rising generation to chronic under-achievement.

Which is how we arrived where we are today: Greater variance in household incomes than at any time in recent memory, and no easy way to fix it.

Especially if we allow such lousy reporting to frame the problem inside a politically-convenient narrative that hints at “solutions”.

What to do with Xbox?


Regarding my last post about carving up Microsoft, a Twitter conversation with a good friend merits some elaboration.

I believe that Xbox is not immune from the please-your-best-customer plague that afflicts the rest of Microsoft and kills off disruptive innovation. Here’s why:

One of the coolest things that happened during my tenure at Microsoft was the development and launch of the Kinect sensor for Xbox. Using Kinect for the first time — playing a modded version of Forza 3 at the house of a friend who was on the dev team long before the product had a name — was a frighteningly cool experience. It was instantly clear to me that the technology had immense potential to broaden the reach of gaming beyond the traditional target market that skews young, male, and geeky.

My kids were even invited to be Kinect testers before launch. One afternoon we worked our way through a labyrinthine warehouse near the old company store until we found a mocked-up living room. The kids got some snacks and played a pre-release build of Kinect Adventures. They were absolutely mesmerized. The testers were gathering data on the performance of the sensor, and about a week later they called me up and asked me to bring my then-3-year-old back for a second round. “We just fixed a bug that affects only short skinny kids with long hair, and she’s our ideal test case.”

I had high expectations for the product based on these experiences, and the Kinect launch just blew them away. The product was a runaway success, a proud moment for Microsoft when we really needed a hit.

A few months after the launch I got a call from an executive on the Kinect team who wanted to meet to discuss the healthcare market. I paired up with a colleague and we headed over to the side of campus where all the cool kids work.

At the meeting we learned that their phone had started ringing off the hook as soon as the product launched: customers and partners were calling with all sorts of ideas about how to use Kinect in different verticals. Not being a group particularly concerned about customer intimacy, Xbox just told them all “No.” The phone calls soon became frequent enough that the team hired a guy just to answer the phone and tell everyone “No,” which he dutifully did. But being a smart guy, he started keeping a record of who was calling, what industry they were from, and what their idea was before he gave them the obligatory “No.”

After a few months the smart guy found this job pretty depressing. So he created a report for his bosses that showed the distribution of all the people he had said “No” to by industry, to illustrate that maybe they should be paying attention to what these customers were saying. And healthcare was far and away the top industry in his dataset, with 3x more inbound calls than any other vertical.

So the Kinect team had called a meeting with me and my colleague to learn more about what they could do to address the healthcare market.

We took to the whiteboard and started with the forces that are acting on the healthcare system: An aging population, the obesity epidemic, increasing incidence of chronic diseases like diabetes and congestive heart failure, health reform, business model changes due to the rise of accountable care organizations, population health management, etc.

“We think you should consider that the management of chronic diseases in the elderly is a market that is worth in excess of $100B per year, and it will likely require technology similar to Kinect in order to engage with these patients at home, where caring for them will be least expensive,” we said.

“So… make a game for old sick people?”

“Not a game per se, more like a set of apps that use the Xbox + Kinect hardware to allow healthcare providers and payers to manage expensive chronic conditions at a lower cost.”

“Guys, we’re Xbox. We don’t do old people.”

“Why not?”

“It would ruin our brand.”

“What do you mean?”

“Well, our customers are hard-core gamers. You know, young, male…”

“Wouldn’t a set of chronic disease management apps for a different market segment enable you to increase the install base of your platform rather dramatically and then monetize other content, like TV and movies?”

“I think we’re going to make a fitness game for kids who are hard-core gamers.”

“Why is that?”

“Because the kids who play games the most probably need some exercise. And they already know our brand, so we think they’ll buy our fitness game.”

“So your games made these kids fat, and now you’re going to make another game to help them get in shape?”

“Yeah. Pretty cool, isn’t it?”

Well, no, not really. Taking a magnificent technology like Kinect that has the ability to change the world in seriously meaningful ways and offering it only to your best customers within the context of how you make money today is not cool. It’s picking the low-hanging fruit.

My colleague and I left the meeting disappointed that we couldn’t convince these guys to elevate their vision and accomplish anything more than attempting to undo some of the damage that their product had inflicted on a generation of sedentary kids. We were frustrated that they had called and asked for our advice when they already seemed to have made up their minds about what they were going to do.

We soon found out why. As we followed up with the Kinect guys a few weeks later, we learned that the exec who led the discussion had been given a nice promotion to lead a new game studio focused on — you guessed it — fitness games for fat kids.

I’m sure it has been a good career move for him, a low-risk way to get to the next level by offering something novel to his best customers. I just regret that he’ll never get measured against the massive value that he could have created if he had been more willing to pursue disruptive rather than sustaining innovation.

So I believe that Xbox has tremendous potential. They have some great technology and some fantastically talented people. They could reinvent the way we experience media in the living room by combining TV, movies, music, gaming, and web content in innovative ways. They could leverage other consumer technology assets in Microsoft’s portfolio and do some great things. And they could go far beyond that if they are willing to think differently about who their best customers might be in the future.

But I think that they are just as risk-averse as the rest of Microsoft when it comes to business model innovation. Odds are that young, male, flabby geeks will continue to occupy the attention of the people running the business for the foreseeable future. And if Xbox can’t focus beyond their current customers, they would be better off on their own than as part of Microsoft’s broader consumer strategy.
