[Scene: An angry civil servant from a government department confronts me, a civil servant working for GDS.]
“It’s ridiculous. The service standard is just an excuse for external organisations to make money out of government. You [pointing at me] should be ashamed of this.”
“What’s the problem?”
“This is the problem. Every team has to have all of these people in it [brandishes print-out]. You can’t have a massive team that size every time you want to do anything. You’re just making money for contractors.”
“I think there might be a misunderstanding here. This is a list of skills that you need in order to be able to deliver good services. It’s not a list of roles, or of full-time positions. In fact, there’s only one position where there is a stipulation about the amount of time they work on the team…”
“Well, it’s *&%$ing ridiculous. And they’ve demanded that we have a [purses lips as though eating a lemon] Content Designer. What even is a content designer, and who has one of them? You’re just making this up.”
“When you say ‘they’ve demanded,’ how did this happen?”
“We failed our assessment and we failed because we didn’t have a content designer. You shouldn’t be allowed to stop us releasing stuff that we’ve paid for. Who even knows what the $%^* a content designer is?”
“Ah. What about the other team that took a service to assessment last week. Did they do OK?”
“Did they have a content designer on their team?”
“No. I told you before: they don’t exist, no-one here has one.”
“So, why do you think they passed but you didn’t?”
“Well, it just proves that it’s a lottery. Someone didn’t want us to pass.”
“Actually, I don’t think it does. Content Designers work with the text in services, and that list you showed me is a list of skills, not roles. I saw both services before they went into assessment. The other team had invited someone from your marketing and communications department, who had a background in editorial, in to do some part-time work on the text in the service. That work included responding to issues identified in testing. The text in that service was clear, and presented no problems to the panel, so they passed it. The text in your service had a number of issues, including some complex words and some unclear instructions, so the panel have asked you to do some work on the text before returning.”
“Well, that’s pointless, because we don’t have any content designers here, and we’re not going to get any, either.”
“The thing is, no-one will ever know whether or not you have a content designer, or any of those roles on that list that you showed me, unless you get things wrong. As long as the service that is presented has no major issues, the panel will pass it. They’re not going to come down and visit and do a roll call on the attendance at stand-up every day: that’s ridiculous. The only time the panel will raise concerns is if they spot something in the service that doesn’t work. Then they’ll ask whether you have people able to work on it to fix it. If the panel ever ask ‘do you have a [x] working on this service?’, that’s when you’re in trouble.”
“Well, I can’t see the point. This is all ridiculous and you’re just making sure that certain people can make money out of it…”
Responding to governance outcomes
The Service Standard – and the inter-departmental politics around it – is in a different place today than it was almost ten years ago, when this exchange (slightly edited for brevity) happened. This post is not about that.
I’ve recounted this story to so many people and teams over the years that it’s become a way for me to illustrate some truth about governance, and the way people react to the idea that someone else can have an influence on the success of the work of your team.
I think there are three human responses that occur in my example, that often take place in real life, and that we are all guilty of at some point. I’m writing them down here not to point fingers, but to describe a bit of what – in my experience, anyway – is going on in discussions like this.
1. My dad’s bigger than yours
“It’s ridiculous. The service standard is just an excuse for external organisations to make money out of government. You should be ashamed of this.”
When there isn’t an established relationship of trust, any governance outcome is touchy. An outcome that isn’t what you want, or even that is what you want but also suggests you’ve missed something, can hit a team in a uniquely vulnerable place. This is particularly the case when the team isn’t completely secure in their identity or existence; for example if they’re struggling to establish themselves internally, or when there isn’t an internal consensus about the direction or purpose of the team.
At this point, the most common immediate reaction to a less than 100% perfect outcome is not much more sophisticated than ‘my dad’s bigger than yours’: the whole approach to governance is wrong, the panel were incompetent/didn’t listen/were too inexperienced/weren’t wearing suits, your bosses will be talking to their bosses and then they’ll be sorry, it’s only a matter of time until the tables turn and then you’ll see what it feels like, it’s a scam designed to make someone else some money…
For some, this process goes through the SARAH curve, where immediate shock and anger turn into an eventual acceptance – that the result took place, if not that it was completely valid. For others, time fosters and magnifies resentment and bad feeling. I’ve heard many stories from people about minor perceived injustices from assessments years ago that still rankle; I’ve led assessments where the complaints from the service team have continued for some time; and I’ve even had people join a service team for an assessment that they’ve admitted beforehand has no possible chance of passing, and yet still come out boiling with anger that the team failed the assessment.
What’s the difference between acceptance and continuing resentment? I think it’s about team dynamics and how teams are facilitated through the process of managing a result that’s not what they want. If this goes badly then blame happens, the resentment multiplies and people reinforce each other’s perceived injustice; if it goes well, then the team can concentrate their energy on taking the recommendations and moving forward. I sometimes wonder whether the Prime Directive should be appended to Service Standard assessment results because failures are rarely, if ever, down to an individual not doing their job. (Some of the best work I’ve seen presented at assessments has resulted in a ‘not pass’ because the service isn’t ready for the next phase and needs to resolve a few issues first: that’s the most frequent failure reason in my experience and is a judgement on timescales, not the quality of work.)
The personal validation of an assessment pass is a heady drug: people stop you in corridors and tell you how well you did, everyone smiles at you for a bit and you might even get a shout out in a corporate newsletter somewhere. In retrospect, I certainly have been guilty of paying too much attention to that and not enough to preparing teams for the process, helping them absorb a result that they didn’t want and moving forward.
Even if you have a case against the outcome and can mobilise some heavyweight support in favour of it, it’s worth considering losing the battle to win the war before you go charging in. I once had a service team I’d prepared fail a service standard assessment on a couple of points. For what it’s worth, I initially thought that they’d been robbed and, on reflection, agreed with them that they at least hadn’t had the rub of the green. They were offered the chance by the panel to return 3-4 weeks later to resolve those points. I encouraged them to accept this offer and get it fixed. The department spent months arguing over the result instead. No process involving humans is ever perfect, and there is always something you can learn – but only if you can channel your energy into this and away from looking for more reasons why it wasn’t your fault.
2. Goodhart’s law
“This is the problem. Every team has to have all of these people in it [brandishes print-out]. You can’t have a massive team that size every time you want to do anything.”
The second human response to governance is to try and control, to codify exactly what must happen to produce the required result. Once a team stops interpreting the intent of the governance and creates a checkbox approach, there is plenty of room for misunderstanding. “But we did user research every single sprint, so I don’t see how it’s possible that we failed on the understanding of the user… and we did twice as much design work as we did on the service that passed last year so we can’t have failed on that, either.”
In the example I started with, this desire to create a checkbox approach led to someone insisting that the Service Standard always demanded a team with a full list of full-time roles. (It doesn’t.) There are many other examples that I could give here: people who think that the Service Standard dictates how long a discovery phase should take, for example. (It doesn’t do that, either.) I think we’re now at the point where any claim that ‘the Service Standard says…’ is subject to some form of Betteridge’s law, but all of these approaches come from a desire to try and control the governance and avoid asking difficult questions.
Goodhart’s law says that when a measure becomes a target, it ceases to be a good measure. My version of this is that ‘as soon as a measure becomes gameable, it ceases to be useful’. Good governance approaches – and especially ones that cover such a multitude of potential use cases as the Service Standard – avoid being gameable by sitting at a higher level than checkboxes. Any team that tries to reduce ‘what is the user need and how can you prove this?’ to checkboxes will struggle.
3. Learned helplessness
“Well, that’s pointless, because we don’t have any content designers here, and we’re not going to get any, either.”
The final human response to governance is to diminish the importance of any result by emphasising that you don’t have the power to make changes. In this approach, it’s the people who have the money, or the people who sit on the board, or the people who wrote the policy, or… anyone but the service team themselves.
This is not to deny that we all work within sometimes significant constraints, or to legitimise an aggressive ‘I’m not interested in problems, only solutions’ approach. This is to suggest that it’s always easier to point the finger elsewhere than it is to make the hard yards of working out what it is that you can do.
In the example we started with, it was true that the assessment report recommended a content designer and it was also true that the department in question didn’t have any content designers. Other teams in the same organisation had worked out how to find these skills in other places, but the service team weren’t interested in that because they’d got fixated on the concept of a content designer as an immovable block that someone else had to resolve.
Winning at governance ultimately means taking any outcome on board and processing it objectively, avoiding the temptation to reduce complexity to checkboxes, and practising honesty as a team about what you can change and what you can’t. None of this is easy, but the better you are at handling this, the better you are as a team. Any team can handle expected outcomes; only a good team can handle unexpected outcomes.