The alpha, beta, live lifecycle

What we can learn from the history of the service standard

I’ve had a lot of conversations with people about the service standard since it was introduced. There’s one aspect of the standard that stays hidden from sight and that most people miss: the service standard is itself a service. Understanding this is the key to how the standard works.

If that sounds a little meta, let me give you some history of the service that is the service standard, to put this statement in context. (Full disclosure: I’ve a bit of personal history here. I worked for GDS during the introduction and initial iteration of the standard, and continued as a lead assessor and contributor to updates after I left.)

1. MVP (4 points)

Most people think that the 26-point service standard originally announced was the first version of the standard. Actually, that’s not true. Like all good services, there was an MVP of the service standard, used to evaluate the running of the assessment process. This used an initial 26-point standard, but only assessed services as pass/not pass against four of the points. The most important part of introducing a standard is the early experience of it.

This MVP approach allowed service teams to get used to the process, assessors to be brought up to speed, and other elements of the process (e.g. how the reports were written, where they were published, timings etc.) to be fine-tuned. There were even some changes to the 26 points, as you can see in this early post on the Service Standard from Richard Sargeant in June 2013. And I’m not sure where the content designers were when Mark Mcleod wrote this post, but “meet all 26 criteria” is one of several upcoming changes put in bold, ahead of the initial service going live in 2014.

2. Launch (26 points)

After a year in which the assessments team had run over 50 assessments, the full standard was launched and all 26 points now applied to services being assessed. (These days, we’d probably refer to this as the move from private to public beta, as the standard had gone from partial application to a full, public service.)

Initially, the standard applied only to a small subset of digital services, as new services launched. It took four months for the first 27 assessments to take place, and another year to hit a cumulative total of 100 assessments [1].

3. Version 2 (18 points)

The standard needed some additional guidance and training for assessors. It also needed an iteration based on feedback and experience. The new 18-point standard launched on 29 May 2015, but didn’t pass its Alpha until November, or its Beta until 2016.

As a couple of posts showed, there was no let-up in continually testing and improving understanding of the standard. In 2016, when the service went into Beta, there was further work on the approach and guidance.

4. Iterating to version 3 (14 points)

Once any service is up and running, changes have more impact and take longer. The next update to the Standard – taking it from 18 to 14 points – took nearly two years. The initial blog post was published in September 2017, then an update a year later in September 2018. Finally, after a huge amount of work, including roadshow events with government departments all over the UK (yes, even in Newport, where I was working at the time), the most recent version of the standard launched on 9 May 2019.

For the first time, the standard was titled “the government service standard” rather than simply the “digital service standard”, and for the good reason that it covered ‘whole problems’ rather than just the digital bits. It felt like the standard had finally grown up.

Why is all this important?

The standard has been created, developed and maintained by a dedicated team of brilliant people who have kept it running for a decade. Anyone who has used the standard, used a service that has been assured and improved by the standard, or adapted the standard for use elsewhere in this time owes them a debt of gratitude. 

I used to joke to colleagues that I was advising on digital work that “a website is for life, not just for Christmas”. By this I meant that you can’t just create a site (or an app, or a transactional service) and walk away: it requires ongoing commitment, budget and time.

“a website is for life, not just for Christmas”

Sometimes, when you speak to people about the service standard – particularly in the context of assurance approaches – they assume that it’s simple to set up a similar standard. After all, you can take the existing standard as a starting point and amend it to suit your priorities and context. It’s not much more than a day’s work to get something workable ready to send out. And that’s it, unless you want to circulate your initial draft for a few comments.

These people are not wrong: creating a service standard in the abstract is easy. What takes effort is making sure that the standard meets user needs; that it gets tested, iterated and improved in ways that don’t devalue it or cause consistency issues; that assessors get trained and supported; and, most of all, that the standard gets used and used and used again, because of the value it delivers, the issues it ensures get caught before they become major problems, and the links with other services that it suggests.

What difference does it make that the service standard is a service? It means that, for the standard to work, you have to continually provide the people, finance, and top-level support to ensure that it continues to evolve, meet user needs, and can respond to new scenarios (like COVID services, like being applied to internal as well as external services, like the increased use of AI, like…). You can’t create a standard, announce it, and walk away; it’s for life and not just for Christmas.

We’ve learned a lot about how critical digital functions need ongoing support – and not the occasional support of people when they have a few hours free, or the short-term burst of work from one person looking for some CV points – to work. Research ops, the management of design pattern libraries, the creation and sustenance of communities of practice, corporate blogs: these are all functions that need people, finance, and top-level support to continue in the long-term. 

The history of the service standard shows that assurance is no different: the service standard is a service. The glamour and publicity go away after the initial launch, but the continual application and evolution of a standard is where the hard work takes place, and where the big outcomes occur.


[1] I still remember reading that, looking at the list and realising with some surprise that I’d either assessed or prepared teams for assessment for more than 10 of the 100.

[2] If you’re still reading this, then this footnote doesn’t exist, but of course the need for assurance and what constitutes minimum viable assurance are reasonable questions. Just not ones for this blog post. (TL;DR: there is always governance/assurance, it’s just whether you want to be honest about it; and effective assurance is assurance that drives outcomes not assurance that exists to make people a bit more happy about stuff.) 

[3] Whilst I was writing this, Vicky Teinaki also wrote an excellent post about the history of the service standard: https://www.vickyteinaki.com/blog/a-brief-history-of-the-service-standard.
