“I’d like you to predict exactly what food you’ll need to buy for an elaborate family gathering happening five years from now.”
This was the thought experiment a colleague recently included as part of a workshop. He chose this challenge because the ensuing discussion can't help but slam straight into the wall of “big upfront design.”
This term was popularized during the rise of agile processes and the (somewhat oversold) death of waterfall project management. Put simply, it describes the fallacy of believing you can predict, far in advance:
- The context or environment of a problem
- What people will need by way of a solution
- How much effort will be needed to produce a solution
By contrast, agile thinking says, “your first guess about what you need right now is probably good enough to get started — try something and see how the universe responds.”
When you hear someone accuse someone else of “big upfront design,” what they’re saying is “put down the crystal ball, Nostradamus, and do the simplest thing that’ll teach us what we need to do next.”
Failure Tolerance
One piece of psychology at play in big upfront design is failure aversion. We want to be able to tell ourselves that, if something does go wrong, at least we did everything we possibly could to protect against risk.
The cultural (and practical) problem with this kind of thinking is that it trades speed, agility, and creative freedom for theoretical safety.
Think about this sequence of events:
- You identify that failure X would be pretty serious or embarrassing.
- You decide you need a policy to prevent X.
- That policy will have side-effects — for example, it may slow you down.
- That slowness will be, in itself, a failure (Y), but it will be far harder to notice.
- So you conclude that creating the policy was a net positive, because failure X is visible and failure Y is not.
In short, we don’t correctly balance the cost of a policy against its benefits because we focus so hard on the headline failure the policy is meant to prevent.
For example:
- Having someone guess a password for our system would be really bad.
- So, let’s make everyone use 12-character passwords where at least one character must be a punctuation mark.
- Now, we’ve just made passwords considerably harder to remember.
- People have to make more login attempts (slowing them down), get locked out more often (really slowing them down), and some have started writing their passwords on sticky notes.
- So we’ve avoided guessable passwords (the original risk) at the cost of everyone’s time and happiness, and possibly at the cost of actual security.
In this example, a better approach might have been to introduce a password-strength meter so that people could make their passwords as strong as possible while still keeping them memorable. The policy-policy (more on that term below) would then be to introduce more stringent measures only if breaches actually occurred as a result of password-guessing.
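If you were curious what that might look like in practice, here is a rough sketch using zxcvbn, an open-source strength estimator. The library choice and the score threshold are assumptions for illustration, not a prescription.

```python
# A minimal sketch of a strength meter, assuming the zxcvbn library
# (pip install zxcvbn). It estimates real-world guessability instead of
# enforcing a rigid character recipe.
from zxcvbn import zxcvbn


def strength_feedback(password: str) -> str:
    result = zxcvbn(password)      # "score" runs from 0 (weak) to 4 (strong)
    if result["score"] >= 3:       # threshold chosen purely for illustration
        return "Looks strong enough."
    suggestions = result["feedback"]["suggestions"]
    return "Consider making this stronger: " + "; ".join(suggestions)


if __name__ == "__main__":
    print(strength_feedback("correct horse battery staple"))
```

The specific tool doesn't matter; the point is that you measure actual strength and nudge people, rather than mandating a character recipe that pushes them toward sticky notes.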
Big Upfront Policies
So what can we do? In keeping with this theme of “doing the simplest thing,” engineers are fond of the phrase You Ain't Gonna Need It (YAGNI). They use it to prevent over-engineering a system. At work, policies often feel like acts of over-engineering.
For example, an employee comes to you and asks whether they can expense a book. At your company, nobody has ever really made this kind of request. There's a voice somewhere in your brain saying, "I can't say yes to this until we have a proper expense policy. Otherwise we might end up with an unfair system that's impossible to manage."
That voice represents your instinct to lean into big upfront design. It soaks up your time and may not deliver any meaningful extra value. What you need is a rule of thumb: only formulate policies when you have clear evidence that they will have a much-needed effect. You need a policy-policy.
To take the book expense example, why not try: "This is the first time this has happened. We might need a proper policy in the future. We'll stick with being ad-hoc until the monthly cost passes $200 or more than 10 people make these requests in a quarter."
Putting tripwires like this in place frees you from having to worry about a runaway problem that may never materialize.
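To make the tripwire concrete, here is a toy sketch of what that check could look like, using the $200-a-month and 10-requesters-a-quarter thresholds from the example. The record and function names are hypothetical.

```python
# A toy "policy-policy" tripwire for ad-hoc book expenses. The thresholds
# come from the example above; everything else is purely illustrative.
from dataclasses import dataclass
from datetime import date


@dataclass
class ExpenseRequest:
    requester: str
    amount: float
    when: date


def policy_needed(requests: list[ExpenseRequest],
                  today: date,
                  monthly_limit: float = 200.0,
                  max_quarterly_requesters: int = 10) -> bool:
    """Return True once the ad-hoc approach has outgrown itself."""
    this_month = [r for r in requests
                  if (r.when.year, r.when.month) == (today.year, today.month)]
    quarter = (today.month - 1) // 3
    requesters_this_quarter = {r.requester for r in requests
                               if r.when.year == today.year
                               and (r.when.month - 1) // 3 == quarter}
    return (sum(r.amount for r in this_month) > monthly_limit
            or len(requesters_this_quarter) > max_quarterly_requesters)


# Example: two modest requests in May don't trip either wire.
requests = [ExpenseRequest("sam", 45.00, date(2024, 5, 3)),
            ExpenseRequest("alex", 38.50, date(2024, 5, 17))]
print(policy_needed(requests, today=date(2024, 5, 31)))  # False: stay ad-hoc
```

The moment this returns True is the moment to sit down and write the real policy, armed with a quarter's worth of actual requests rather than guesses.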
Very often, a key move is to rely on agreements rather than formal policies. Jumping straight to a policy often ignores the fact that we can achieve good outcomes simply by relying on the humanity and common sense of our colleagues. For example:
- You’ve seen a few people take a noticeably large amount of the free snacks you keep in the kitchen. You could institute a strict system of allowances with some kind of controlled dispensing. Or you could just recognize that some people want to eat more snacks than others and cosmic fairness isn’t really an important goal here. So, as long as the monthly snack budget doesn’t exceed X and people can generally get a snack if they want one, you don’t really need to do anything.
- Some people would like to switch to working late hours, but experience tells you that this disrupts creative work on the team. You could try to get ahead of potential arguments and unrest with a detailed policy on how and when they should make themselves available for work. Or you could just say, “support each other as best you can, and escalate quickly if you need our help to resolve specific situations.” If you see lots of escalations, you can revisit the idea of a more detailed policy or agreement.
Trying It Out
When we occupy operational and compliance roles (such as those on HR teams), we get used to the idea that, if we need a certain kind of behavior from people, we need a policy. How many of those policies could be deferred and then, when truly needed, improved by the extra time you've had to learn more about the problem?
You can (and should) push to make your organization as agile and responsive as possible by placing the fewest constraints you can on the people who work there. As part of that push, try implementing policy-policies that give you confidence to retain simplicity and defer complexity.