What makes some systems better than others?
Understanding the hidden hierarchy of design and what makes systems truly excellent
When I was an undergrad studying engineering at the University of Michigan, parking was one of the most pervasive problems I faced. The Michigan Engineering campus sits north of the main campus, so many students had to commute by bus or by car. Parking, however, was scarce, expensive, and far from most of the campus buildings. Many students began parking in the faculty lot, which was naturally much closer. This posed a problem for the faculty, who now couldn’t find parking of their own. Eventually, corrective actions were put in place: more communications, signs, permits, and eventually a security patrol to enforce the parking lot rules. When I visited the campus a few months ago, I saw that the lot now had a tollgate allowing access only to vehicles with approved identification.
Now consider another, seemingly unrelated example: seatbelts in passenger vehicles. Seatbelts were first designed in the 19th century but only started appearing in cars in the mid-1950s. Their life-saving value was immediately apparent. By the late 1960s, federal law made the installation of seatbelts mandatory in all new vehicles. Yet despite the well-known safety benefits and government-led public service announcements, people still weren’t wearing seatbelts. In the 1970s, manufacturers introduced a dashboard warning light along with an audible alarm for drivers who were not buckled up. Then, in the 1990s, statewide seatbelt laws became the norm. The progression of public safety communication is perhaps best captured by the shift from the 1980s “Don’t be a dummy” crash test dummy commercials to the now punitive “Click it or ticket” signs and campaigns that line our roads.
The Escalation of Corrective Actions
Believe it or not, the same mechanism drives human-system behavior in both of these scenarios. Both demonstrate the way humans try to fix things, through activities called corrective actions. Consider the similarities. Each scenario is trying to solve a problem of human behavior within a system, whether drivers and parking lots or drivers and their seatbelts. The corrective actions employed to change behavior began with notifying people of the rules, benefits, or options, then escalated to signs and warnings, then alerts, and finally oversight by an enforcing authority.
An important lesson, and one of the central ideas of systems design, is revealed in the difference between the two scenarios. Eventually, the university was able to break free from ongoing security surveillance of the parking lot when it introduced a tollgate. Meanwhile, police departments all over the nation are still relied on to enforce seatbelt laws.
I implore you to look at the above examples differently than we normally do. I’m not asking whether a tollgate should have been installed in the parking lot or whether a police officer should write a ticket. Those questions invite endless debate that rests more on opinion than fact.
Instead, consider how effective each of the corrective actions was, from public service announcements to warning lights, alarms, oversight, and prevention. Which was most effective? Which solution feels the most definitive? Which requires the fewest ongoing resources, now and in the future?
An appreciation of systems thinking and dynamics is at the heart of these questions. To rely on systems is to build simple, elegant designs that solve our problems in perpetuity.
Two Actors: Humans & the System
At the heart of these corrective actions is the interplay between two actors—the human and the system. To understand why some solutions endure and others don’t, we need to look more closely at how these actors interact.
Performance is built on systems. This is the main idea of this Substack. Indeed, our entire lives can be understood as systems and processes, with varying degrees of coordination in the handoffs between ourselves, machines, and other humans. This is an observable truth, and one that isn’t appreciated as much as it should be.
Many processes rely on an interaction between two important actors: human beings and machines. The two interact in a coordinated, rhythmic dance of handoffs, signals, feedback, and delivery. Usually, the human initiates the dance, telling the machine what to do, under what conditions, and with what information. The machine obeys, performs its role, and replies with feedback, indicating either that its role was completed as intended or that an issue occurred. This is the fundamental paradigm for how work is designed and completed.
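If it helps to see that loop spelled out, here is a minimal sketch in code. Everything in it is invented for illustration (the Feedback class, the run_machine function, the drilling command); it is not any real system, only the shape of the handoff: the human issues a command with conditions and information, the machine does its part, and the feedback determines what the human does next.

```python
from dataclasses import dataclass

@dataclass
class Feedback:
    """What the machine reports back after performing its role."""
    completed: bool
    message: str

def run_machine(command: str, conditions: dict) -> Feedback:
    """A stand-in for any machine: it receives instructions and conditions,
    performs its role, and reports whether it finished or hit an issue."""
    if conditions.get("material_loaded"):
        return Feedback(True, f"'{command}' completed as intended")
    return Feedback(False, "issue: no material loaded")

# The human initiates the dance, then reacts to the machine's feedback.
result = run_machine("drill bore hole", {"depth_m": 12, "material_loaded": True})
if result.completed:
    print("Operator proceeds:", result.message)
else:
    print("Operator intervenes:", result.message)
```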
This dance looks different across jobs and industries. On mining sites, humans interact with drills and bulldozers. Actuaries interact with complex mathematical models. Doctors dance with MRI machines, vital-sign monitors, and lab tests. The specific machines and procedures are merely details. Human-machine interaction doesn’t depend on a computer, or even on advanced technology. Every industry relies on a coordinated interplay between humans and machines.
This is an important concept with a wide range of consequences. Innovation finds its greatest impact within this interplay. It is in these interactions that progress is made most effectively, consistently, and sustainably. To recognize them is to open up a whole domain of opportunities: opportunities for productivity, effectiveness, and resiliency.
The Ideal vs. The Reality
But even in this dance, missteps are inevitable. No matter how well-designed the choreography, systems fail. That’s where resilience comes in.
It should come as no surprise that the best systems are the ones that coordinate this dance seamlessly. Where handoffs between humans and machines are effective and safe, with little delay in between, progress and efficiency abound. However, this ideal is rarely realized. Instead, as you’ve probably come to expect, errors creep in constantly. Humans will err. Machines will malfunction. As we’ve already seen, there are numerous reasons why this is so. But the fact remains that dealing with issues is the norm rather than the exception.
If we cannot guarantee the elimination of errors, then we must discover more effective ways of identifying issues, correcting them, recovering from them, and mitigating their consequences. Collectively, we can call the ability to perform these functions resilience. Resilience is absolutely essential in critical and unforgiving situations. Nuclear facilities and aviation are some of the first applications that come to mind, but heavy manufacturing, chemical processing, and oil and gas operations all have aspects that demand resiliency. For many businesses, resilience is a hallmark of organizational maturity and operational excellence.
The Automation Trap
Too often, and particularly since the advent of artificial intelligence, discussions about capability center on the ways and means of automating processes, removing decisions, and trusting machines to handle all of this work for us. I’m no Luddite, but this is not a sound and sustainable practice. There are several limitations, conundrums, and paradoxes surrounding automation, which we’ll discuss in a later chapter. Anybody remotely aware of these issues would think twice before leaping with both feet into this sort of technological folly. To return to the present point: pursuing automation with reckless abandon will not allow organizations to develop operational excellence and resilience. In fact, excessive dependence on automation creates greater fragility within systems.
Because resiliency depends on handling errors and exceptions, it necessarily involves human intervention and decision-making. We should not try to run away from this or engineer it away. It would be imprudent to try to automate people out of these roles. Instead, greater resiliency is best achieved by creating efficient and robust interactions between people, systems, and their environments.
One example of this is the practice of Lean manufacturing. Originating as the Toyota Production System (TPS), Lean has made an incredible impact on organizational effectiveness, market agility, and costs. Lean does not focus on mechanical or digital automation; in many instances, Lean practitioners prefer pen and paper to computer screens. Its value as an operational discipline comes instead from the thoughtful involvement of humans within their environments. This, coupled with steadfast alignment with and support of organizational principles, has allowed organizations that practice Lean to achieve incredible results. As a model, Lean exemplifies the necessity of developing capabilities to respond to unexpected issues. Accomplishing that feat necessarily involves creating robust and resilient human-machine systems.
What Makes a System Better?
Given the corpus of knowledge about system dynamics and human error, it should be clear that anybody who makes a point of designing and constructing robust, human-centered systems will perform with greater capability, efficiency, productivity, and consistency. Such individuals will be in a position to develop and grow.
But what exactly makes one system better than another? This is the critical question that determines whether your solutions will require constant maintenance and oversight, or whether they will work reliably in perpetuity.
Consider again our two examples:
The parking lot progression:
Communications and reminders
Signs
Permits
Security patrols
Automated tollgate
The seatbelt progression:
Public service announcements
Warning lights
Audio alarms
Laws and enforcement
(Still requires ongoing enforcement)
Notice that the parking lot eventually reached a solution that no longer required ongoing human enforcement. The tollgate simply prevents unauthorized vehicles from entering. The seatbelt problem, however, never reached this level. We’re still relying on police officers to enforce compliance.
This reveals a fundamental truth: solutions exist on a spectrum of effectiveness. Some solutions require constant human attention, monitoring, and enforcement. Others are built into the system itself, requiring no ongoing effort.
The Spectrum of Solutions
When we look at how problems are typically solved, we can identify a clear progression from less effective to more effective solutions:
Relying on the Individual: At the most basic level, we simply train people, give them instructions, and hope they remember and comply. This is the least effective approach because it places the entire burden on human memory, attention, and motivation.
Adding Reminders: Next, we add signs, labels, and communications to remind people what to do. This is slightly better, but still requires people to notice, read, and follow the reminders.
Providing Alerts: We can design systems that actively alert people when something needs attention or when they’re about to make an error. This is more effective because the system takes on some of the monitoring burden.
Building in Inspection: We can create checkpoints where work is verified before it proceeds. This catches errors, but only after they’ve already occurred.
Preventing Errors: At the highest level, we design systems that make errors impossible or extremely difficult to make in the first place. This is the most effective approach because it eliminates the problem at its source. (A brief sketch after this list shows how the same progression plays out in a small software example.)
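To make the progression concrete, here is a minimal sketch of the same levels applied to a tiny software problem: keeping a temperature setting within a safe range. The example and its names (SafeTemperature, the 0-100 range) are invented purely for illustration; the point is only how responsibility shifts away from the person as you move down the levels.

```python
# Levels 1-2 - rely on the individual / add a reminder:
# "Please keep the temperature between 0 and 100."
# A training slide or a comment like this one is all that stands in the way.
temperature = 250  # nothing pushes back

# Level 3 - provide an alert: the system warns, but still proceeds.
def set_temperature_with_alert(value: float) -> float:
    if not 0 <= value <= 100:
        print(f"WARNING: {value} is outside the safe range 0-100")
    return value

# Level 4 - build in inspection: a checkpoint rejects the error after it occurs.
def inspect_temperature(value: float) -> None:
    if not 0 <= value <= 100:
        raise ValueError(f"rejected at inspection: {value} is out of range")

# Level 5 - prevent the error: the design makes an out-of-range value
# impossible to create, so no one has to remember, notice, or enforce anything.
class SafeTemperature:
    def __init__(self, value: float):
        self._value = min(max(value, 0.0), 100.0)  # clamped on entry, like a tollgate

    @property
    def value(self) -> float:
        return self._value

print(SafeTemperature(250).value)  # 100.0, the error never enters the system
```

Notice that only the last version removes the ongoing burden: the first two depend entirely on people, the alert depends on someone noticing it, and the inspection depends on someone responding to the rejection, but the prevention level refuses the error the way the tollgate refuses the car.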
Why This Matters for You
Every system you interact with—from your morning routine to your work processes to your personal productivity habits—falls somewhere on this spectrum. Understanding where your systems currently sit and how they could be improved is the key to dramatic improvements in performance, consistency, and peace of mind.
The parking lot eventually reached error prevention with the tollgate. Unauthorized vehicles physically cannot enter. The seatbelt problem never reached this level—it’s still stuck at inspection and enforcement, requiring constant police monitoring.
In your own life and work, how many of your “solutions” are really just ongoing enforcement efforts? How many problems could be solved once and for all with better system design?
This isn’t about working harder or trying to be more disciplined. It’s about recognizing that some solutions are fundamentally better than others, and that the highest-performing individuals and organizations are the ones who understand this distinction.
The framework for understanding these differences—what we call the “Goodness of Systems”—will arm you with a clear sense of what makes some systems better than others. In the discussions that follow, we’ll explore exactly how to identify where your systems fall on this spectrum and, more importantly, how to move them toward genuine excellence.
The question is no longer “How do I solve this problem?” but rather “What kind of solution will actually work in perpetuity?” The best systems don’t depend on perfect people; they make perfection unnecessary.


