Upstream: The Quest to Solve Problems Before They Happen - by Dan Heath


Date read: 2022-09-03
How strongly I recommend it: 8/10
(See my list of 150+ books, for more.)

Go to the Amazon page for details and reviews.

Very process-improvement focused. The author gives great examples of businesses and organizations using upstream thinking to eliminate downstream problems, along with techniques for thinking ahead. Great for process improvement and second- and third-order systems thinking.


Contents:

  1. UPSTREAM VS. DOWNSTREAM
  2. THREE BARRIERS TO UPSTREAM THINKING
  3. UNITE THE RIGHT PEOPLE
  4. CHANGING THE SYSTEM
  5. FINDING A LEVERAGE POINT
  6. EARLY WARNING DETECTORS
  7. MEASURING SUCCESS



My Notes

UPSTREAM VS. DOWNSTREAM
Downstream actions react to problems once they’ve occurred. Upstream efforts aim to prevent those problems from happening.

So often in life, we get stuck in a cycle of response. We put out fires. We deal with emergencies. We handle one problem after another, but we never get around to fixing the systems that caused the problems.

That’s one reason why we tend to favor reaction: Because it’s more tangible. Downstream work is easier to see. Easier to measure. There is a maddening ambiguity about upstream efforts.

In this book, I’m defining upstream efforts as those intended to prevent problems before they happen or, alternatively, to systematically reduce the harm caused by those problems.

You don’t head upstream, as in a specific destination. You head upstream, as in a direction. Swim lessons are further upstream than life preservers. And there’s always a way to push further upstream—at the cost of more complexity.

Downstream efforts are narrow and fast and tangible. Upstream efforts are broader, slower, and hazier—but when they work, they really work. They can accomplish massive and long-lasting good.



THREE BARRIERS TO UPSTREAM THINKING
  1. “Problem blindness”—the belief that negative outcomes are natural or inevitable. Out of our control. When we’re blind to a problem, we treat it like the weather. We may know it’s bad, but ultimately, we just shrug our shoulders. What am I supposed to do about it? It’s the weather.

    When we don’t see a problem, we can’t solve it. And that blindness can create passivity even in the face of enormous harm.

    That’s just how it is—so no one questions it. That’s problem blindness.

    To succeed upstream, leaders must: detect problems early, target leverage points in complex systems, find reliable ways to measure success, pioneer new ways of working together, and embed their successes into systems to give them permanence.

    The escape from problem blindness begins with the shock of awareness that you’ve come to treat the abnormal as normal.

    The seed of improvement is dissatisfaction.

    A search for community: Do other people feel this way?

    Something remarkable often happens next: People voluntarily hold themselves responsible for fixing problems they did not create.

    What’s odd about upstream work is that, despite the enormous stakes, it’s often optional. With downstream activity—the rescues and responses and reactions—the work is demanded of us. A doctor can’t opt out of a heart surgery; a day care worker can’t opt out of a diaper change. By contrast, upstream work is chosen, not demanded.

  2. Lack of ownership: A corollary of that insight is that if the work is not chosen by someone, the underlying problem won’t get solved.

    A lack of ownership, though, means that the parties who are capable of addressing a problem are saying, That’s not mine to fix.

    The question they asked themselves was not: Can’t someone fix this problem? It was: Can we fix this problem? They volunteered to take ownership.

    Then she said, “I’d like each of you to tell the story of this situation as though you’re the only one in the world responsible for where we are.”

    Asking those questions might help us overcome indifference and complacency and see what’s possible.

    Researchers have found that when people experience scarcity—of money or time or mental bandwidth—the harm is not that the big problems crowd out the little ones. The harm is that the little ones crowd out the big ones.

  3. “Tunneling”: When people are juggling a lot of problems, they give up trying to solve them all. They adopt tunnel vision. There’s no long-term planning; there’s no strategic prioritization of issues. And that’s why tunneling is the third barrier to upstream thinking—because it confines us to short-term, reactive thinking. In the tunnel, there’s only forward.

    The need for heroism is usually evidence of systems failure.

    How do you escape the tunnel? You need slack. Slack, in this context, means a reserve of time or resources that can be spent on problem solving.


UNITE THE RIGHT PEOPLE
How will you unite the right people? Start with Sigfúsdóttir’s insight: Each one of them gets a role.

To succeed in upstream efforts, you need to surround the problem. Meaning you need to attract people who can address all the key dimensions of the issue. In Iceland, the campaign leaders engaged the teenagers and almost all the major influences on them: parents, teachers, coaches, and others. Each one had something critical to contribute.

The lesson of the high-risk team’s success seems to be:
  1. Surround the problem with the right people

  2. Give them early notice of that problem

  3. Align their efforts toward preventing specific instances of that problem. To clarify that last point, this was not a group that was organized to discuss “policy issues around domestic violence.” This was a group assembled to stop particular women from being killed.

When you design the system, you should be thinking: How will this data be used by teachers to improve their classrooms? How will this data be used by doctors and nurses to improve patient care? How can the local community use the information?

McCannon believes that groups do their best work when they are given a clear, compelling aim and a useful, real-time stream of data to measure their progress, and then… left alone.



CHANGING THE SYSTEM
Upstream work is about reducing the probability that problems will happen, and for that reason, the work must culminate in systems change. Because systems are the source of those probabilities. To change the system is to change the rules that govern us or the culture that influences us.

So in the pursuit of systems change, where do you start? What do you do in, say, the first month of what might be a decades-long effort? You look for a point of leverage.

Upstream leaders should be wary of common sense, which can be a poor substitute for evidence.



FINDING A LEVERAGE POINT
While every domain of upstream work will have its own unique equation—and thus its own leverage points—the strategy used by the Crime Lab’s leaders to find those leverage points is closer to universal: Immerse yourself in the problem.

When you get close to a problem, what exactly are you looking for? How do you know a promising lever and fulcrum when you spot it? In searching for a viable leverage point, your first pass might be to consider, as the leaders in Iceland did, the risk and protective factors for the problem you’re trying to prevent. For teenage alcohol abuse, a protective factor is being involved in formal sports—it eats up a teen’s time and provides a source of natural highs.

As an alternative to the focus on risk and protective factors, consider whether your leverage point might be a specific subpopulation of people. In many domains, a very small set of people can create an inordinate burden on the system.

The reason to house the homeless or prevent disease or feed the hungry is not because of the financial returns but because of the moral returns. Let’s not sabotage upstream efforts by subjecting them to a test we never impose on downstream interventions.



EARLY WARNING DETECTORS
Some early-warning systems work wonders: They can keep elevators from failing and customers from churning. Other times, they may cause more harm than benefit, as in the thyroid cancer “epidemic” in South Korea. How do we distinguish between the two? One key factor is the prevalence of false positives: warnings that incorrectly signal trouble.

As we design early-warning systems, one question to keep in mind is how many false positives we can tolerate. Our comfort with that level of false positives may, in turn, hinge on the relative cost of handling false positives versus the possibility of missing a real problem.
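
To make that trade-off concrete, here is a minimal sketch in Python (my own illustration, not from the book): it compares the expected cost of acting on early warnings, including the follow-up work created by false positives, against the expected cost of handling problems only after they happen. The function names and all the numbers are hypothetical.

# A back-of-the-envelope illustration (made-up numbers, not the book's):
# compare the expected cost of acting on early warnings, including the
# follow-up work created by false positives, with the cost of handling
# problems only once they occur downstream.

def expected_cost_with_alerts(p_problem, hit_rate, false_alarm_rate,
                              cost_of_miss, cost_of_follow_up):
    # Real problems the system fails to flag still cost us downstream...
    missed = p_problem * (1 - hit_rate) * cost_of_miss
    # ...and every alert, true or false, costs follow-up effort.
    alerts = p_problem * hit_rate + (1 - p_problem) * false_alarm_rate
    return missed + alerts * cost_of_follow_up

def expected_cost_without_alerts(p_problem, cost_of_miss):
    # No warning system: every real problem is handled downstream.
    return p_problem * cost_of_miss

# Rare but expensive problems and cheap follow-ups: warnings pay off here.
print(expected_cost_with_alerts(0.01, 0.9, 0.05, 100_000, 500))   # ~129
print(expected_cost_without_alerts(0.01, 100_000))                # 1000.0

Flip those costs—cheap misses, expensive or harmful follow-ups—and the false positives can outweigh the benefit, as with the thyroid cancer "epidemic" in South Korea.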



MEASURING SUCCESS
They used what Andy Grove, the former CEO of Intel, called “paired measures.” Grove pointed out that if you use a quantity-based measure, quality will often suffer. So if you pay your janitorial crew by the number of square feet cleaned, and you assess your data entry team based on documents processed, you’ve given them an incentive to clean poorly and ignore errors, respectively. Grove made sure to balance quantity measures with quality measures. The quality of cleaning had to be spot-checked by a manager; the number of data-entry errors had to be assessed and logged.

Here are five questions to include in your pre-gaming:
  1. The “rising tides” test: Imagine that we succeed on our short-term measures. What else might explain that success, other than our own efforts, and are we tracking those factors?

  2. The misalignment test: Imagine that we’ll eventually learn that our short-term measures do not reliably predict success on our ultimate mission. What would allow us to sniff out that misalignment as early as possible, and what alternate short-term measures might provide potential replacements?

  3. The lazy bureaucrat test: If someone wanted to succeed on these measures with the least effort possible, what would they do?

  4. The defiling-the-mission test: Imagine that years from now, we have succeeded brilliantly according to our short-term measures, yet we have actually undermined our long-term mission. What happened?

  5. The unintended consequences test: What if we succeed at our mission—not just the short-term measures but the mission itself—yet cause negative unintended consequences that outweigh the value of our work? What should we be paying attention to that’s offstage from our work?


In planning upstream interventions, we’ve got to look outside the lines of our own work. Zoom out and pan from side to side. Are we intervening at the right level of the system? And what are the second-order effects of our efforts: If we try to eliminate X (an invasive species or a drug or a process or a product), what will fill the void? If we invest more time and energy in a particular problem, what will receive less focus as a result, and how might that inattention affect the system as a whole?

We can pay to fix problems once they happen, or we can pay in advance to prevent them. What we need are more business and social entrepreneurs who can figure out how to flip payment models to support the preventive approach.

Paying for upstream efforts ultimately boils down to three questions:
  1. Where are there costly problems?

  2. Who is in the best position to prevent those problems?

  3. How do you create incentives for them to do so?

“Be impatient for action but patient for outcomes.”