Outcomes, not Distillation – An approach to templates

A former coworker was looking into starting up a scotch brand and decided to fill me in. One of the difficulties involved is that scotch typically sells for more as it ages - but it's a tough pill to swallow to start up and then have to wait 10 years for your first product!

There are two approaches often taken to mitigate this:

1) Start by making spirits that don't require an aging period (e.g. vodka), and use that money to keep the business afloat until the barrels are finished aging.

2) Start by selling a blended scotch, purchasing whisky from other distilleries and making your own custom blend.

Age labelling regulations state that for approach 2, you must label the scotch with the age of the youngest whisky in the blend - so if you wanted to offer a 5 year scotch tomorrow, you could start by selling a blend of 5, 8, and 10 year scotches. As your own batches matured, you'd gradually replace each component with your own until, 10 years later, it's no longer a blended scotch and you are selling your own 10 year old scotch.

I was remembering this story when thinking about templates. 

Templates have been a cornerstone of every midsized (100+ person) organization I have worked in. They are an incredible way to help someone get past the blank page problem, but they proliferate over time, and without careful attention in the early stages of their development they can do more harm than good – not to mention the ongoing cost of maintaining them. There are some pretty good blended scotches out there, but when it comes to templates, starting from a blend often hampers how great your results can be.

A common default for producing templates is to distill one out of existing examples. You might task a team with "produce a good template for how we write project updates"; to help them out, you offer 4-5 examples of great ones to work from. YOUR goal is "level up how we communicate with clarity on our projects", but THE TEAM’s goal often becomes "distill a template from these great artifacts" - and too often you get a bad blend!

The team has been tasked with delivering a template, and they have 5 great examples to work from. When they look at the examples, the examples are often REALLY different from one another - so surely, the thing that only shows up in one or two of them isn't as important as the thing that shows up in all of them, right? You end up with a distilled template that is worse than the sum of its parts - one that captures only the attributes the examples share, without recognizing and highlighting the “why” that makes each of them really great.

Having to use templates built in this way is demoralizing - the template sucks, and the things made with it don't seem very good either; often you can't do work at the quality you hope to within the constraints of the template. Your brain shuts off and you drop into "just get this filled out and I am done" mode, instead of engaging with the deeper problem the template was meant to help you solve. Over time, this is a path to mediocrity.

Before you delete all your templates because some random guy on the internet told you to, let's talk about how to make this better. YOU know what the goal of the final artifact is - get clear on that and build towards it! If you are asked to develop a template, start differently by examining:

1) What is the outcome we are trying to drive by developing this template? (Often "write a template" is an XY problem)

2) For the exemplars of what doing this really well looks like, what makes each of them good? What is shared that is good, and what is unique that is good?

3) What conditions led to each of them being good, and how can we support and reinforce those for folks following this path?

Something I’ve noticed about the folks I consider experts at this is that they practice outcome-oriented template authoring rather than lowest-common-denominator distillation – often working backwards from questions like the ones above. If you start from there, you might find you don’t even want a template at all!

Thanks to Allen Pike and John Brennan for suggestions and early draft feedback.

Raising the Floor, by Lowering the Ceiling

The last time I saw Rami Ismail speak, he did a little flourish with the audience that went like this:

Rami: "Ok. I want all of you to think about a platforming game."

*audience all look thoughtful*

Rami: "Alright. What I had in mind was a 3D platforming game like Banjo Kazooie, how many of you thought of that?"

It turned out about 60% of the crowd had been thinking about 2D platformers, myself included. It was a demonstration of communication that has stuck with me ever since, especially as I have noticed the ways that specificity in communication comes with tradeoffs, and ambiguity can produce surprises.

Once you are attuned to just how differently requests can be interpreted, there's a natural inclination to focus on clarity and specificity when communicating with others. The more you clarify the specifics, the higher you raise the floor of what someone delivers. But there is a caveat: **specificity and clarity can also lower the ceiling of what someone can surprise you with**.

In a previous life, I put a team member who was eager to grow in charge of defining the path forward on a messy refactor we needed to do in order to enable something new. I did not specify what I was expecting or when I hoped to hear from him. In my head, I was expecting to get a plain text bulleted list that looked like this:

- Remove code that does X
- Add code that does Y
- Migrate to using Y code

I waited for about a week, and then got frustrated and asked to see the plan. You can imagine my surprise when I got handed an 8 page Google Document that specified the changes we needed to make down to the line number!

By not raising the floor for that team member by telling him what I was expecting ("plain text bulleted list, under 10 items"), I was quite negatively surprised by what he delivered and by how much work had gone into it. But if I HAD raised the floor, what are the odds he would have diverged from my expectations and delivered that document instead?

This can be really powerful, especially in situations where you explicitly hope to let someone run free and surprise you with what they can do. When that is not what you want, it is worthwhile to understand what you DO want, so you can be specific. If I spend time raising the floor with someone by being specific, they are far less likely to disappoint me by showing up with less than I expect... but also less likely to surprise me with something I never would have thought of. As time went on, getting clearer on when I was looking for one or the other (or, more often, something in the middle) helped me understand how to communicate with others.

This is similar to how you might use Task Relevant Maturity (originally from *High Output Management*, I believe) to define how you work with a given person. If TRM is low and consistency is important, raising the floor is beneficial. If TRM is high, raising the floor puts you at risk of not even realizing what opportunities you miss when someone sticks to the lanes you have defined for them.

So yes, sometimes it’s important to raise the floor. It might be necessary to constrain the solution space to set somebody up for success. But try to be intentional when you do it, and keep in mind the potential upside of surprise - don't lower the ceiling unless you have to.

Thanks to Allen Pike and John Brennan for suggestions and early draft feedback.

Using your test suite to educate with guard rails

As a codebase grows and ages, writing comments on dangerous parts of your codebase stops scaling well — comments easily drift out of date if someone forgets to update them and you’re relying on others (who may be in a hurry) to pause and read your comment before making their changes. You won’t always have time to go back and refactor an area of the codebase that has become dangerous, but you can at least minimize future pain for others by putting a guard rail around it.

Here’s an example of what one of these guard rail tests might look like - a minimal sketch in Python, built around a hypothetical MAX_RETRIES constant that production depends on:
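```python
import unittest

# In a real codebase this constant would be imported from the production
# module it lives in; it's defined inline here, with a hypothetical value,
# so the sketch stays self-contained.
MAX_RETRIES = 3

class TestMaxRetriesGuardRail(unittest.TestCase):
    def test_max_retries_has_not_changed(self):
        # The expected value is deliberately duplicated as a literal rather
        # than loaded from config; if it were loaded dynamically, the test
        # could never catch an unintended change.
        self.assertEqual(
            MAX_RETRIES,
            3,
            "\nMAX_RETRIES has changed!\n\n"
            "This constant controls how many times we retry a failed request\n"
            "before giving up. Raising it can hammer downstream services\n"
            "during an outage; lowering it can drop work we would otherwise\n"
            "recover. If this change is intentional, update the expected\n"
            "value in this test and call out the impact in your pull request.",
        )

if __name__ == "__main__":
    unittest.main()
```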

Without a test like this in place, a developer workflow might look like this:
  • Open pull request that changes constant value.
  • Pass code review.
  • Merge pull request, take down production.

By adding this test, the workflow we are trying to encourage is:

  • Open pull request that changes constant value.
  • Pass code review.
  • Wait, CI is failing?
  • Read and understand error message. Consider whether changing constant value is required. Don’t take down production!

Because these tests are a little different from your average test, there are a few rules that go into writing a good one:

  • Write a clear, helpful failure message (ideally prose and multi-lined). You want the impact of the change someone is making to be SUPER CLEAR, so do whatever you have to in order to communicate it. I’ve written versions of these tests that include markdown tables demonstrating what changing a constant will do — the sky’s the limit, as long as you can make the impact of someone’s change front and centre. Spend a lot of time making sure your failure message is clear.
  • Be as self-contained as possible; duplicate any value that you plan to check. If the test loads a value dynamically, you’ll lose the benefit of the test (making sure a value doesn’t change without intent) - the sketch below contrasts the two.
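
To make that second rule concrete, here’s a minimal sketch (hypothetical names again) contrasting a tautological check against a duplicated-literal guard rail:

```python
import unittest

# Stand-ins for a real codebase (hypothetical names): the production constant
# under guard, and a config dict that loads its value dynamically.
MAX_RETRIES = 3
config = {"max_retries": MAX_RETRIES}

class TestSelfContainment(unittest.TestCase):
    def test_tautological_check(self):
        # Anti-pattern: the "expected" value is loaded dynamically, so if
        # someone changes MAX_RETRIES, both sides of the assertion change
        # together and it can never fail.
        self.assertEqual(MAX_RETRIES, config["max_retries"])

    def test_guard_rail_check(self):
        # Guard rail: the expected value is duplicated as a literal, so the
        # test fails the moment the constant drifts from the agreed-upon 3.
        self.assertEqual(MAX_RETRIES, 3)

if __name__ == "__main__":
    unittest.main()
```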

By migrating things I’d normally write a comment about to instead be enforced by the test suite, I’ve been able to build better guard rails into our codebase, so newcomers get surprised less often while taking their changes to production. That failing test acts as a self-documenting guard rail, and it has the added benefit of only appearing when someone is interacting with the code I wrote the documentation for, which means they get feedback as close as possible to the change they are making.

What are the guard rails in your codebase?



This post originally appeared on Clio Labs; thanks to John Brennan for reading drafts of this.