Efficiency vs Innovation - Working for Efficiency

Many people have built their careers off the never-ending pursuit of efficiency. How can we do this better? How can we do this faster? How can we do this with fewer headaches? How can we enable more people to do this? These questions come to mind when we're working for efficiency, but it's all predicated on understanding what "this" is in the first place.

In the last post, I described efficiency as doing what we already know how to do, but better. We know "this" is the right thing to do and that "this" will yield the results we're after, so we need to figure out how to do "this" faster, cheaper, better, with less pain, and have it done by more people. We know "this" works, we just want it to work better.

Efficiency-minded individuals will always be able to find ways to optimize an existing system, but there are traps! I'd like to talk about three:

  • The more mature the process, the more radical the optimization needs to be in order to have a meaningful payoff.
  • Optimizing locally vs across the entire system.
  • Working for efficiency in a process that isn't effective.

Trap 1: Optimizing Mature Processes

In many cases, optimizing processes is a worthwhile pursuit. We can make a change at Point A, and it has positive impacts for the rest of the process or for everybody who participates. The more mature the process, the more likely it is we've made continued optimizations over time. On the other hand, the more mature the process, the less payoff each new optimization is likely to deliver.

In the beginning, we may make a tweak that improves efficiency by 50%. This is fantastic, and it's addicting! We're making the system better. The next change, however, is more of a project than a tweak, and it only improves efficiency by 35%. Over time, each optimization gets harder, requires more energy, and delivers less improvement than the one before. This isn't a law of nature, but it tends to follow human behavior...we focus on the low-hanging fruit that makes the biggest impact and do those changes first. Harder projects take more time, so we do those later. By the time we get to them, we've already made a lot of changes that improved the system.
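To make that pattern concrete, here's a minimal sketch with made-up numbers (not measurements from any real process): each successive optimization is a smaller percentage, and because the baseline keeps shrinking, each one also shaves off less absolute time than the one before.

```python
# Toy model of diminishing returns on a mature process.
# The baseline and improvement percentages are illustrative, not real data.

task_hours = 10.0                               # hypothetical starting time per task
improvements = [0.50, 0.35, 0.20, 0.10, 0.05]   # each successive change helps less

for i, gain in enumerate(improvements, start=1):
    saved = task_hours * gain
    task_hours -= saved
    print(f"Optimization {i}: {gain:.0%} gain saves {saved:.2f}h -> {task_hours:.2f}h per task")

# The first tweak saves 5 hours; the fifth saves about 7 minutes,
# even though the later changes usually demand the most effort.
```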

Is this universally true? No, not at all. Circumstances come along that completely break the mental model I outlined above, such as the introduction of new technologies. As an example, I recently started recording videos for an online course I'm building. I had been working on improving my recording and editing pipeline and got to a point where it was moving pretty quickly; I'd say I'd managed to cut my editing time in half through focused improvements. Continued optimization of that process was no longer worth the effort, as it distracted me from the actual task I was trying to get done (recording videos).

Then I found Descript. Descript is a technology that completely disrupted my approach. By adopting this tool, I cut my time in half again, and I also started a new optimization loop based on my new process. I imagine I'll get to the point with my new process where each new tweak yields less improvement than the one before, so I'll settle into a new normal until the next major disruption comes along.

Trap 2: Optimizing Locally

The more important the outcome, the more complex the process is likely to be. This probably reads like common sense, but it's possible to forget the complexity when we start optimizing for efficiency. Put simply, we can make an improvement to Step 2 that causes Step 5 to take four times as long and utilize twice as many people. Sure, we optimized Step 2, but is the entire process better?

This is a hard discussion to have with many teams and organizations because they don't have clear visibility of the entire system. I've also seen many, many cases where it's not clear what the system's meaningful metrics are, which makes it impossible to tell whether our optimizations are improving anything at all.

Consider a software development process. The goal of the system is to deliver value to users and stakeholders as quickly and frequently as possible. In one step, the development team is optimizing for deploying code frequently. In another step, the pipeline team is optimizing for stable deployments. The organization has invested in training and tooling to help these two teams go faster, yet the two core operational metrics (deployment frequency and cycle time) have actually gone backward. Turns out, there's a team that has to manually test each code deploy before it gets to the pipeline team, and since the development team is now releasing code at a much faster pace, this testing team is weeks behind and the backlog of code to test continues to grow. The operational changes for the development team actually slowed the system down because the organization looked at local optimization instead of system-wide optimization.
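Here's a minimal sketch of that dynamic with made-up weekly numbers (none of these come from a real team): development output doubles, the manual testing step's capacity stays fixed, and the backlog, and with it the wait for a change to reach production, grows every single week even though the "optimized" step got faster.

```python
# Toy simulation of a local optimization feeding a fixed-capacity bottleneck.
# All numbers are made up for illustration.

dev_output_per_week = 20       # changes produced weekly after the dev team "sped up"
test_capacity_per_week = 12    # changes the manual testing team can verify weekly
backlog = 0

print("week  backlog  approx_wait_weeks")
for week in range(1, 9):
    backlog += dev_output_per_week                   # new work arrives from development
    backlog -= min(backlog, test_capacity_per_week)  # testing clears what it can
    wait = backlog / test_capacity_per_week          # rough queue time for a new change
    print(f"{week:>4}  {backlog:>7}  {wait:>17.1f}")

# The backlog grows by 8 changes every week, so cycle time keeps climbing:
# the locally faster development step made the whole system slower.
```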

Trap 3: Optimizing the Ineffective

There is nothing so useless as doing efficiently that which should not be done at all. - Peter Drucker

I can recall a conversation at one of my previous jobs where I proudly explained how much more efficient our team had become at working through our task backlog because of how we started allocating work and writing user stories. I'll never forget this particular conversation because later that week, we got customer feedback that all the stuff we were working on was utterly useless for them. We got really good at building stuff nobody needed or wanted. Whoops.

The Peter Drucker quote above has been used over and over again, but it's worth repeating because I don't think we spend enough time stepping back to make sure what we're doing is the right thing to do in the first place. As a hiring manager, I find this is one of my favorite parts of onboarding new employees. They bring fresh eyes to our established systems, and I encourage them to ask the question, "Why do we do this?" When I don't have a well-formed answer immediately, it's usually worth re-evaluating the process to make sure it's work that matters.

If you're working in a sales environment where most reps aren't hitting their number, the default behavior is to try to optimize the sales play. What if the sales play is wrong? What if we're building a sales engine based on a blueprint that's fundamentally flawed? All the optimization in the world won't help you hit your number.

In these scenarios, working for efficiency won't lead us to the results we're looking for. We need a different playbook. We need to innovate! I'll be sharing some thoughts on working for innovation in a later post.
