Developers don’t belong on an assembly line

As we all learned in high school history class, the Industrial Revolution gave birth to machine-driven manufacturing processes that greatly increased production, lowered the costs of manufactured goods, and raised everyone’s standard of living. It even changed how society was structured, massively expanding the middle class and bringing home the idea that technology could improve our lives.

One of the changes the Industrial Revolution brought about was in how we worked. The notion of economies of scale led to larger and larger factories. As things moved forward, we realized that productivity could be measured easily: workers screwing caps onto toothpaste tubes could be readily observed and the successfully attached caps counted. Efficiency experts like Frank Gilbreth of Cheaper by the Dozen fame found ways to make factories run even more efficiently.

One could argue that the Industrial Revolution ended the day the ENIAC computer was delivered to the US Army, signaling the beginning of the Digital Age. Since that day, the software industry has struggled to manage software development effectively.

The root of that struggle comes from “fighting the last war.” A well-known concern in the military is spending too much time thinking about previous wars when preparing for future ones, relying on technologies and strategies that were effective in the past but no longer work. (Think of Pickett’s Charge in the age of the highly accurate rifle.) The software development industry is no less guilty of this.

This tendency plays out in any number of ways. We’ve all seen the trope: the factory whistle blows, workers stream through the gates “punching the clock,” and managers wander the floors making sure that caps are being efficiently placed on cans of shaving cream. That made a certain amount of sense. Workers were measured by their output, and output was easily measured. Workers standing on the assembly line for a standard shift became a pretty good indicator of success: more feet spending more time planted along the assembly line meant more production.

More butts, more software

Sadly, we translated this into butts in chairs in the software development business. Most recently, this has manifested itself in companies forcing folks to come back into the office full-time. Just like workers on the factory floor, software developers were expected to be at their desks pounding away on their keyboards producing… what exactly? Code? Features?

A screwdriver factory’s success is measured by the number of screwdrivers it can produce given a certain amount of input. So, it seemed reasonable to measure what software developers produce. We all know what a disaster measuring Lines of Code turned out to be. But what, exactly, does a software developer even produce? We struggle with measuring individual developer productivity, in large part because we can’t answer that question well.

I guess managers figured there must be something to count happening out there in the cube farm, and that the more time people spent there, the more of whatever it is they produce would get produced.

Fortunately, one of the silver linings of the awful pandemic was an awakening for these old-school managers. Sometimes the best thing a software developer can do is stare at code for three hours and then spend fifteen minutes typing. Or even better, a developer might remove 1,300 lines of spaghetti code and replace them with 200 lines of an elegant and superior solution. Or a developer might spend a week building something “the right way,” because saving the team countless hours of future maintenance is time well spent.

Perhaps the worst carryover from the industrial age is the notion of time tracking. Managers feel a strong urge to measure something, and “time spent on a task” is a shiny object that is hard to resist. Thus, teams are often required to track the amount of time it takes to fix each bug or complete individual tasks. These times are then compared across developers to provide some measure of productivity and a means of determining success. A lower average bug-fixing time is good, right? Or is it?

The worst metric of all

Time tracking of software developers is, in a word, awful. No two bugs are alike. Inexperienced developers might quickly crank out fixes for many easy bugs, while more experienced developers, who are often handed the more challenging issues, take longer. Or maybe the junior developer is assigned a bug they can’t handle. Worse, time tracking encourages developers to game the system. When they’re worried about how long a task might take, they may avoid tasks that look likely to run past the “estimate,” along with all manner of valuable but “non-productive” activities.

Don’t we have to admit that there is no way to determine how long any particular software work unit should or will take? Having to account for every minute of the day merely creates bad incentives to cut corners. It can also make a smart, capable developer anxious about taking “too long” to solve a challenging problem that was supposed to be an “easy, one-line change.”

We know how long it should take to apply labels to mayonnaise jars, but no one can predict with great certainty how long it will take to fix a non-trivial bug or implement a new feature. Forcing people to track their time on tasks of indeterminate duration has many downsides and, well, basically no upsides.

Digital products are not physical products. Software is a product utterly unlike anything seen before. Thinking that creating software can be managed and executed using the same techniques learned during the Industrial Revolution is fundamentally flawed. Fortunately, more and more organizations are no longer preparing to “fight the last war” and have realized that what worked on the assembly line in a car factory doesn’t work in a software development shop.
