Prologue
A meeting room in a UK government department. Being discussed is the integration of a vendor’s system into the digital service managed by the various people on the other side of the room. A number of questions are raised about the approach to the integration and the user experience of the service once the new flow is included. The head of the vendor team considers the questions, looks pityingly at the people on the other side of the table, sniffs, and says “we’ve been doing this for over twenty years. I suggest that you trust the experts on this one.”
The problem with user needs
“Democracy is bigger than user needs” [1], “Putting users first is not the answer to everything”, or even “the short-term satisfaction of ‘meeting user needs’”. I’m more than sympathetic to the views in these posts, but I’m also concerned.
When the situation detailed in the prologue happened we were able to challenge the approach. We uncovered some significant problems for the user that would otherwise have gone into the final version of the service. Intervening meant that we saved money and improved services. I, and the many others who did the same, couldn’t have done this successfully without GDS’s stress on user needs [2].
What concerns me is that suggestions of a move away from a stress on “user needs” open the door to the big bad wolf of senior stakeholders/academics/industry figures who think that their expertise means that they can determine the design and approach of services without any need to consider other views or test their approach. I know that this is the last thing that any of the contributors to these discussions wants, but even suggesting that there are things that are more important than user needs risks the self-appointed experts returning. And that brings with it the risk of a move away from user research in government service design.
Maybe I’m being fanciful, but I think some of the problems stem from an overly narrow definition of “user needs” and a misunderstanding of what constitutes value.
Too narrow?
In the original GDS principles, there was an asterisk next to principle 1: “Start with User Needs*”. This asterisk read “*Not government needs.” Whilst there are a lot of good changes that have been made, I think unmooring this principle from “not government needs” was a retrograde step [3]. Removing it has meant that we’ve narrowed the scope of what we mean by “user needs”.
In the original context, I took this principle to mean “listen to the people outside the organisation managing the service above and beyond the people working for the organisation managing the service”. Paul Downey’s illustration below shows this visually with its open-air Lowry-esque crowd scene, as far away from a boardroom table as it’s possible to get. The first principle also came to support the third principle – design with data – in that researching users and showing evidence for the hypothesis that you wanted to develop was a way of ensuring that you didn’t proceed with the elitist assumption that service operators knew best.
Out of context, “Start with user needs” has been interpreted as meaning “unless you can show an immediate improvement – a sort of verified dopamine rush – for users, don’t bother doing it”. This narrow definition is dangerous because it can be used to justify all kinds of short-termism and blindness to wider consequences. The links that we started with make the consequences of this narrowing eloquently clear.
Keeping the width
In agile approaches, we aim to prioritise on the basis of value. Maintaining an understanding of value that goes wider than the particular flow that one person uses, or wider than the individual immediately using the service, is something that will be familiar to those who’ve been managing – or contributing to – the prioritisation of work for services on a regular basis.
At times, I’ve been tempted to caricature this conversation as the “don’t bother upgrading the server to make it secure because that doesn’t meet a user need” conversation. That’s a caricature because it’s narrow and simplistic, and you find out fairly quickly that forcing every piece of work to link to an immediate improvement doesn’t work.
Prioritisation on the basis of value involves hard work and deep thought, because you are frequently weighing different kinds of value against each other. I’m not intending to list all of the different types of value, but some of the discussions about value that have caused me to reflect are:
- long-term versus short-term value – simply, value that can be immediately realised against value that will take a longer time period to appear e.g. you’d expect a change in an interface screen to show fairly immediate responses but a change in eligibility criteria for a service might take longer to show a trend;
- primary value versus secondary value – I’ve found it helpful to draw a distinction between a change that delivers value in itself (as in the two examples in the last point) and a change that facilitates further changes (for example, introducing automated testing doesn’t deliver value in itself, but it does potentially improve the quality of other changes);
- value to the immediate user versus value to wider society – for example, a good privacy policy about sharing data isn’t going to deliver value to the user of that particular service, but will deliver value more widely if it allows trust to build in government’s ability to use and share data appropriately [4];
- shallow value versus deep value – shallow value is value that is immediately delivered but doesn’t lead to any further realisation of value (like water in a puddle) and deep value is value that will lead to a number of further realisations of value (like a mine shaft, sometimes you have no idea how deep this will be). An example of this would be the difference between publishing government data in Excel format (shallow, pleases 95% of users, meets an immediate need) and publishing it in open, machine-readable formats (deep, few users, but potentially significant use in becoming part of automated data gathering/fact checking systems).
These are different types of value but they are all value. There will be conversations about these types of value, how they link to organisational objectives, KPIs and OKRs at a macro level and how they inform the prioritisation for the next sprint at the micro level. These are all conversations that product and other digital people will be familiar with.
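The “deep value” point about machine-readable formats can be made concrete with a minimal sketch (the dataset, field names and figures here are invented for illustration): once data is published in an open, structured format, downstream systems the publisher never anticipated – aggregators, dashboards, fact-checkers – can consume it without a human opening a spreadsheet.

```python
import csv
import io

# A hypothetical machine-readable publication: the same figures a
# department might otherwise release as a formatted Excel sheet.
published_csv = """year,service,transactions
2022,passport-renewal,120000
2023,passport-renewal,135000
"""

# Because the format is open and structured, an automated consumer
# can parse and aggregate it directly, with no human in the loop.
rows = list(csv.DictReader(io.StringIO(published_csv)))
total = sum(int(row["transactions"]) for row in rows)
print(total)  # 255000
```

The value here is “deep” in the article’s sense: the CSV itself pleases fewer users than a polished spreadsheet, but each new automated consumer realises further value at no extra cost to the publisher.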
How do we get better at this?
With any sort of value discussion, my instinct is that the more complex the value calculation is, the more openly we should be discussing it. (As Scott has noted recently, we should talk more about value.) Agile approaches usually suggest that this should happen within our teams, so that the entire team takes part in the discussion and understands why we are delivering what we are committing to.
This approach, where the entire team takes part, helps ensure not only that people understand why we’re doing a particular piece of work but also that there’s a forum for multiple voices to be heard. I’d suggest that this conversation about value should also happen outside our teams, so that there’s a check that we’re not getting high on our own supply instead of making sure that we understand the full breadth of our service provision.
We will, inevitably, sometimes fail at getting our approach to delivering value right. Being too narrow in our definition of value is more likely than being too wide, because we all like to see that the changes we’ve made and invested in are making a difference for people. When we fail to see or include any type of value in our prioritisation then we’ve missed something, and product people need to help everyone re-focus.
Let’s make things better for users
I’m loath to let go of “start with user needs, not government needs”, because I think that it is a necessary corrective against our intrinsic instincts. In other words, losing a stress on “user needs” makes it possible for “user wants” to creep in. To make “user needs” work, we need to widen out our conversations beyond the most immediate and apparent needs. That means talking about different kinds of needs and about how we place value on these. After all, people use government services every day without directly interacting with them (street lighting, for example).
It’s not good enough to say that user needs don’t count because a service is different in some way, but neither is it good enough to restrict ourselves to only building services that provide an instant hit of satisfaction to a direct user. As I said on Twitter the other day, “This service makes things better for users” feels like a better starting place than “This service meets user needs”.
[1] To be fair here, you should read this statement in the light of the preceding sentence: “But the learning is certainly not all one way: policy officials are adept at managing the often delicate and contentious trade-offs between different public interests in a way that a more singular focus on ‘user needs’ can struggle with for activity outside of simple public-facing transactional services.”
[2] It’s true that there were plenty of us advocating such approaches pre-GDS but the primacy that GDS placed on user needs and, equally importantly, the assessment regime it placed around them, did open doors and shift conversations across government.
[3] I’d agree that the asterisk was sub-optimal from a content design perspective. “1. Start with user needs, not government needs.” with no asterisks anywhere would have been fine with me.
[4] An alternative version would be: “the short-term satisfaction of ‘meeting user needs’ for drugs, whereas – if offered the choice – users would rather not have become sick in the first place”