Why Simple Thinking Produces Bad Decisions
Of Linear Thinking and Nonlinear Reality
Here is a scenario you have probably seen play out before.
A manager notices that their team’s output is slipping. The numbers are run. The articles are read. The conclusion is arrived at: people aren’t motivated. An incentive program is launched to boost motivation. Cash bonuses for hitting targets. Public recognition for top performers. A leaderboard on the wall.
It works. For about six weeks. Then performance drifts back. A few of the best people quit. Customer complaints tick up. The manager is baffled. He gave people exactly what they needed. The problem was fixed.
Except it wasn’t. Only the symptom was treated. The manager misdiagnosed what was really going on, using a model of reality that was never accurate in the first place.
Now extend that same logic outward. A parent decides their child is struggling in school because they aren’t trying hard enough. A city council builds a new highway to reduce traffic congestion. A person decides the reason they can’t stick to an exercise habit is that they lack discipline. In every case, the diagnosis feels self-evident. In every case, the intervention is launched with confidence. And in every case, a short time later, the results are mysterious, disappointing, or actively worse than before.
This is not a story about bad intentions. It is a story about our poor conception of reality. The culprit, in almost every case, is the same: linear thinking applied to a nonlinear world.
Simple Thinking as Linearity
Linear thinking is not stupidity. It is not laziness. It is the natural cognitive default of a mind that likes fast, coherent explanations for a complicated world: heuristics. And for a very long time, in a very large number of situations, it works well enough.
The structure of linear thinking looks like this:
Simple cause → Simple effect
Variables act independently. Effects are proportional to inputs. Outcomes are, in principle, predictable. If you want more of Y, apply more of X. If performance drops, find what changed and change it back.
This is the algebra of everyday decision-making. y = mx + b. Add one unit of effort, get one unit of output. Remove one source of friction, see one unit of improvement. You probably learned it in about 7th grade, and you haven’t gone much further in appreciating complexity and nuance since. The model is clean. It can be explained in a sentence. It can be justified to a room full of people in a ten-minute presentation.
And that last part is not incidental. That is precisely why the model is so persistent and valuable.
Linear thinking reduces uncertainty. When a problem can be traced to a single cause, the world feels knowable and manageable. When an outcome can be explained by a single variable, credit and blame can be assigned cleanly. Morale is the problem. Motivation is the solution. Talent is the differentiator. These explanations feel convincing because they are simple. We should not confuse that for a reason to trust them.
The Gap Between the Model and the World
The physical world does not run on linear equations. We figured this out a long time ago. We invented calculus because simple proportions couldn’t describe how objects move through space. We developed thermodynamics because heat and energy don’t transfer in straight lines. We built the disciplines of fluid dynamics, ecology, epidemiology, and economics because the systems in those fields interact, feedback, delay, amplify, and collapse in ways that no simple cause-and-effect model can capture. Decision making and performance, in almost any domain of human life, follow the same inconvenient logic.
Think about what actually determines whether a child learns to read well. It is not simply the child’s intelligence, or their effort, or their teacher’s skill in isolation. It is the interaction between those things and the child’s home environment, the quality of sleep they’re getting, the design of the classroom, the feedback loops built into their efforts, the pace at which new material is introduced, the emotional safety of the learning environment, and dozens of other variables operating simultaneously.
Remove one of those variables. Improve it in isolation. The system barely notices, because you haven’t changed the governing structure. You’ve adjusted one dial on a machine with a hundred dials, each of which affects the others. This is why so many programs committed to social good fail to move the needle: the levers they can actually control are far too small to shift the system.
This is the rule, not the exception. Traffic congestion worsens when you add highway lanes, because new lanes induce new drivers. Antibiotic overuse creates resistant bacteria. Drought relief in certain ecosystems triggers population spikes that cause further instability. New variables don’t just affect a target outcome. They ripple through an entire system, interacting with everything already in motion.
Interactions. Feedback loops. Delayed consequences. Nonlinear effects.
This is the actual structure of the world we are trying to make decisions in.
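One of those loops is worth sketching in code. The toy model below illustrates induced demand, the feedback that defeats highway expansion. Every number in it is invented for illustration; the point is only the shape of the dynamic, not a traffic forecast.

```python
# Toy model of induced demand: when roads feel uncongested, latent
# demand converts into new drivers. All parameters are invented.

def congestion_after(capacity, drivers, steps=50, growth=0.1):
    """Run the feedback loop: each step, some fraction of the
    spare capacity is absorbed by new drivers."""
    for _ in range(steps):
        load = drivers / capacity
        if load < 1.0:
            drivers += growth * (1.0 - load) * capacity
    return drivers / capacity

baseline = congestion_after(capacity=100, drivers=100)  # fully congested
expanded = congestion_after(capacity=150, drivers=100)  # 50% more lanes
```

A linear model predicts that 50% more lanes means congestion drops by a third and stays there. Run the feedback version and `expanded` creeps back to within a fraction of a percent of full congestion: the new capacity recruited new drivers until the original equilibrium was rebuilt at a larger scale.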
Systems Thinking as Complex Modeling
Systems thinking does not simplify the complexity of life. It does not collapse multiple variables and nonlinear interactions into convenient main effects. It models the complexity. That is, in fact, the whole point.
Where linear thinking focuses on individual variables and their direct effects, systems thinking focuses on structure. It asks not just what caused a result, but what kind of system produced this result as its natural output.
The distinction is worth laying out:
Linear thinking asks: Who performed poorly? What changed? What variable should we adjust?
Systems thinking asks: What structure produced this behavior? What feedback loops are operating? What constraints are shaping individual choices before individuals even make them?
This shift in framing is not merely cosmetic; it produces entirely different diagnoses and interventions. The manager who thinks linearly fires the poor performers and hires new ones. The manager who thinks systemically asks what constraints are making the desired behavior harder than it needs to be, and removes them. The city planner who thinks linearly adds more lanes to the highway. The planner who thinks systemically asks whether new infrastructure will change commuter behavior in ways that recreate the original congestion at a larger scale. The parent who thinks linearly scolds the child for not trying harder. The parent who thinks systemically asks whether the homework environment, the sleep schedule, or the feedback the child receives is making sustained effort unnecessarily difficult.
Same problem. Completely different diagnosis. And in most cases, a completely different outcome.
Why We Prefer the Wrong Model
If systems thinking is more accurate, why isn’t it the default? The answer is partly practical and partly psychological, and both parts matter. The practical side of the problem is that systems thinking is harder. It requires holding multiple variables in mind simultaneously. It requires patience with delayed effects and indirect causes. It requires resisting the pull of the most visible explanation and looking instead for structural ones. That takes time, cognitive effort, and a tolerance for ambiguity that most environments don’t reward.
The psychological problem is that linear explanations do something systems explanations can’t: they tell us who is responsible. “Motivation was the problem.” “The team lacked talent.” “Leadership was weak.” These explanations are deeply satisfying because they locate cause in a person. They allow us to assign blame, to take credit, and to believe that the right individual, in the right role, with the right attitude, would have produced a different result. This belief is, for most complex outcomes, a diagnostic illusion. But it is a comforting one. And comfort, not accuracy, is what most explanatory models are actually optimized to provide.
There is also an institutional dimension. Organizations reward confident explanations. Meetings do not go well when someone says, “I think the results are an emergent property of several interacting feedback loops operating on a delay, and I’m not sure which lever to pull.” They go much better when someone says, “Motivation is low. Here’s the incentive program.” The second explanation is actionable. It is presentable. It produces a slide deck. The fact that it may be wrong is rarely discovered until long after the meeting has ended.
The Goodness of Systems Design
There is a more precise way to frame what systems thinking is actually trying to accomplish. A system’s goodness is not just a measure of how well it performs at its peak. It is a measure of how well its design mediates performance across all of its interactions: the feedback loops, the delayed consequences, the nonlinear effects, the ways different components amplify or dampen each other over time.
A well-designed system makes high performance normal. A poorly designed system makes high performance heroic: it requires exceptional people operating under exceptional conditions just to produce ordinary results.
The Red Bead experiment, developed by the statistician W. Edwards Deming, demonstrates this with uncomfortable clarity. Workers are asked to minimize defective output. They are incentivized. They are threatened. The best performers are rewarded, the worst are let go. None of it matters. The proportion of defects is determined entirely by the system. Effort, motivation, and talent are irrelevant. The system is the constraint.
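The experiment is easy to simulate. The sketch below is not Deming’s exact protocol, just a minimal version of its logic: every worker draws a paddle of beads from the same mixed lot, and the defect count is set by the lot, not by the worker. The 20% red fraction and 50-bead paddle are illustrative assumptions.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def draw_paddle(red_fraction=0.2, paddle_size=50):
    """One worker's daily output: each bead is red (defective) with a
    probability fixed by the system. Worker effort never enters here."""
    return sum(random.random() < red_fraction for _ in range(paddle_size))

# Three "willing workers" of very different diligence draw from the
# same lot. Their defect counts differ only by sampling noise.
results = {name: draw_paddle() for name in ("diligent", "average", "careless")}
```

Run it repeatedly and the “best performer” changes every time, because there is no performance signal in the data at all, only noise around the system’s built-in defect rate. Rewarding or firing workers on these numbers is rewarding and punishing randomness.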
“A bad system beats a good person every time. No contest.” — W. Edwards Deming
What Deming was diagnosing is exactly what linear thinking obscures: that for a very large class of outcomes, the system structure determines the result before individual effort enters the picture. Blaming the person is not just ineffective. It is a distraction from the actual source of the problem.
The goal of systems thinking is not to excuse poor performance or to eliminate individual accountability. The goal is to ensure that when you intervene, you are intervening in the right place. Most of the time, that place is the structure, not the person.
A Better Set of Questions
The shift from linear to systems thinking is, in practice, a shift in the questions you ask before making a decision. Instead of asking how do I motivate this person, ask what structure makes the desired behavior the path of least resistance. Instead of asking who is responsible for this failure, ask what feedback loops were operating and what they were optimizing for. Instead of asking how do I fix this variable, ask what this variable is interacting with, and whether improving it in isolation will produce the effect I expect.
These are harder questions. They take longer. They do not produce clean, attributable answers. But they produce something more valuable: decisions that survive contact with the actual complexity of the world. A useful heuristic: when a decision seems obvious, when the cause seems clear and the solution seems straightforward, that is precisely the moment to pause. Obvious diagnoses are often the most dangerous ones, because they foreclose the investigation before it has actually begun.
What This Changes
Traditional thinking makes decisions feel easier by making the world seem simpler than it is. Systems thinking makes decisions more accurate by refusing that simplification. One gives you a narrative that is comfortable and presentable. The other gives you a model that is honest and useful.
Neither approach guarantees the right answer. Systems are genuinely complex, genuinely uncertain, and often genuinely resistant to understanding. Even with the best frameworks, decisions made with system models will sometimes fail.
But there is a difference and it matters. When decisions are made with simple, linear models, the failures tend to look mysterious. When incentive programs collapse, when traffic worsens after highway expansions, when more effort produces less result, the linear thinker is genuinely confused. The data doesn’t fit the model. When decisions are made with system models, the failures tend to look like they were discoverable. You can go back, trace the feedback loop that was missed, identify the interaction that wasn’t accounted for. The errors are legible. And legible errors can be corrected.
That is the advantage of systems thinking. Not that it makes outcomes certain. But that it makes them comprehensible.
And comprehensible outcomes, unlike mysterious ones, can be designed.