The last couple of times I’ve prepared teams for service standard assessments, I’ve put together a presentation to help explain the process. This has covered the background to the service standard and the main areas that I’ve seen cause confusion or problems. I don’t intend to cover the background here [1], but I thought I’d list the areas that can cause issues, in case that’s useful for others.
I should point out that these are personal – and not official – views, and that nothing I can write is anywhere near as helpful as going to observe an assessment before going through one yourself. The assessment teams at GDS and in departments are, in my experience, happy to help set up opportunities for people to do this where they can. If you’re looking for other preparation, then think about doing a mock assessment for your team with some volunteer panel members. This presentation also has some helpful detail.
Whether you’re observing, taking part in a mock assessment, or being assessed for real: here are 10 things to watch out for.
1. Frame it
The single most important thing you can do in an assessment is provide a clear view of what you’re asking the panel to assess. You should be able to sum up your service in a single user story. (If your service has different parts for different audiences, e.g. a back office function for internal users and an external front end, then you can have a user story for each part.) Make this as simple as possible, ideally simple enough that the panel will remember it later.
If you’ve got ideas for future development or experimentation or anything that you’re not entirely sure about, keep that for later and keep the scope tight at the beginning. The panel will have no background in your service beyond the briefing you’ve provided, so whatever you define your service as to start with will set the agenda for the rest of the assessment. If scope looks uncertain or unrealistic at this point, then you’re going to get more questions and more confusion.
2. Be aware of context
Whenever I’m lead assessor, I try to remember to do two things. One is to tell the service team that they’ll get a chance at the end of the assessment to add anything they think of partway through. This is because I’ve been on the other side of the table as well, and I usually work out what I should have said in answer to a question 30 minutes too late. The other is to set the context for the assessment panel before we go into the assessment room. Broadly, my summary is that:
- Alpha assessments (to allow a team to proceed to private beta) should be checking whether the service team have experimented, prototyped and tested enough different options, and have done enough work on the difficult or risky areas [2] in the service in order to have a solid foundation [3] to build on in the next phase;
- Beta assessments (to proceed from private to public beta) should be checking whether there is anything about the service presented that could cause problems or embarrass government when it is made available to everyone;
- Live assessments (to proceed from public beta to live) should be checking “can we shut down the old service without any risk?” or “can this new service take the strain of being the sole digital route to access this service?”
This, in a nutshell, is what you need to do a good job of convincing the panel about.
(And one more bit of context: for anything except Alpha assessments, you should have the details of a previous assessment. Pay close attention to the recommendations in this, as those will be the questions uppermost in the minds of the panel.)
3. Don’t filibuster
In nearly five years of chairing assessment panels, I think I’ve only once had a team that didn’t overrun on the initial presentation. To be fair, sometimes that has been because I haven’t done my job of holding the panel back from asking questions until the end of the presentation, so the blame here is shared.
In this, and in the rest of the assessment, it’s tempting to add as much detail as possible. Don’t do this. You’re not going to make the time available for the assessment longer. Any filibustering therefore reduces the time that the panel have to get good evidence recorded against all of the points that they need. [4]
4. Assessments are verbal
The verbal format of an assessment can surprise people who are used to other approaches. For a service standard assessment, you do NOT need to print things out, write them down, or create a PowerPoint presentation for every single point of the standard.
You DO need to be able to talk about your service, so take some notes to remind yourself of key facts if you need to. A handful of slides can help [5], but any other documentation is probably not going to be used. (Unless, that is, the process of writing things down helps you prepare to talk about things. If that’s the case, write as much as you like, as long as you remember that it won’t be used in the assessment.)
5. Assessments assess the past and the future
Assessments do two things: they check that the work you’ve done in the last lifecycle phase gives you the foundation you need to proceed to the next phase, and they check that your plans for the next phase present no risks. This means being aware of past work and future plans. This can confuse people, particularly as other assessment approaches typically only consider one of these aspects.
There are times in my assessment career when I’ve come across teams who have forgotten one part of this equation. For example, I have seen teams present some superb work on an alpha but have no plans for how the private beta will be run. Don’t forget to plan for the next phase; that plan is as important as the work you’ve done in the last phase.
6. Don’t lie
I’m not saying this because of some naïve idea that the world would be nicer if we were all honest; I’m saying this because I’ve been to a lot of assessments, on both sides of the table. Whilst I’ve seen some better-than-good attempts at pulling the wool over the assessment panel’s eyes, I’ve never seen this end well.
In part, that’s because getting a team of five people aligned to the point where they can go through four hours of talking without slipping up is difficult. Mainly, it’s because a panel will become concerned the moment that there is any hint of obfuscation or awkwardness. The risk of one misplaced falsehood is that the panel stop believing everything else you’ve told them, and that’s a big risk to take.
If there’s a problem with something, it’s better to be upfront about it and acknowledge it.
7. Aim for narrative over perfection
The panel aren’t expecting perfection. What matters is that you understand where you are and have taken responsibility for working out what your next steps are and where you need to be by the end of the next phase. Can you explain the narrative of where you started, where you’re currently at and where you’re aiming to get to? If you can, you’re in a good place.
For example, the service team will usually get asked a question about deployments at Beta assessments. Compare these two responses, both of which talk about the same current state. Ask yourself which would give you the confidence to award a pass if you were on the assessment panel.
i. What about deployments?
“I don’t really want to talk about deployments because what we’ve got works fine”
“Do you have downtime?”
“Yes, we usually have some downtime when we deploy”
“How much, typically?”
“It depends”
“How much downtime did you have for your last deployment?”
“Half a day”
“So, there was no service available to your users for half a day – do you have plans for reducing this?”
“No, there’s not much we can do about it.”
ii. What about deployments?
“At the moment, we deploy fortnightly, with some downtime at deploy – ranging from a few minutes to several hours, depending on how big the changes are. This is a big change from the previous practice of quarterly releases. We advertise the downtime to users in advance, and provide a link to the old service while it’s happening in case anyone needs it urgently. We know this isn’t ideal, but we think it’s manageable in beta while the old service is still running. We have plans to introduce zero-downtime deployments with the infrastructure changes we’ll be investigating in this phase, and we would expect to demo these to you when we return for a Live assessment.”
8. Get the timing right
I know that the assessments team do try to ensure that services are ready for assessment before they come in. This doesn’t change the fact that the people who know the most detail about a service are the service team themselves. There is nothing more depressing as an assessment chair than spending a huge amount of resource getting everyone together to assess a team doing brilliant work, only to have to issue a “not pass” because the team came in too early and isn’t ready for the next phase.
Having to tell your senior stakeholders that you’re delaying an assessment is a difficult conversation but – and I’ve done both, so trust me here – it’s nowhere near as difficult as telling them why you got a “not pass” result from your assessment. Be honest, and ask yourselves whether you’re ready: you’re the people in the best position to make the call. If you’re not absolutely sure, then have the awkward conversations now rather than later.
9. Don’t jump too far ahead
There is only one question that I’ve seen stop an entire service team and render them speechless at more than one assessment. That question is “why is this not a Live assessment?” when it’s asked at a Beta assessment (“why is this not a Beta assessment?” at an Alpha assessment is similarly problematic).
Why teams leave Alpha too quickly and jump ahead to the false certainty of Beta land, rather than taking the opportunity to explore different options, and how this causes more problems than any other single factor, is a subject for another blog post. What matters here is that you plan – and present your plan – based on the next phase, not the phase after that, however tempting that might be. If you turn up at a Beta assessment and start talking about plans to turn the old/existing service off next week, be prepared for some tricky questions about why your next phase isn’t actually Live.
10. Listen
Assessments can be tense affairs. There can be a lot riding on them and a lot of pressure from important people to secure results. Many teams undergoing assessment are suspicious of the whole process, especially if it’s the first time they’re going through it. In the middle of all of this, it’s easy to forget that the best part of an assessment is that you get access to a room full of experts [6] who are devoting quality time to giving you the best advice possible.
To make the best of this, you need to be able to accept feedback without instantly thinking about how you can shoot it down. The best service teams in assessments are those confident and self-aware enough to take a point they weren’t expecting, consider it and respond thoughtfully. It is OK to respond “we hadn’t thought about that, but we’ll definitely take a look”, and this will not normally hurt your chances of passing. (Unless you’re at a Live assessment and you’re talking about security, in which case you probably are in a spot of bother.)
This matters even more if you don’t pass first time. The panel writes its recommendations with every intention of supporting and empowering your team. In the event of a “not pass”, those recommendations aim to get your team through a reassessment as quickly and as painlessly as possible. Although a “not pass” can seem like a huge disappointment, it can also protect your team from launching something that isn’t ready – sometimes through no fault of the service team. The best way to get past it is to pay close attention to the recommendations in the report.
Good luck!
[1] If you’re interested, then I’ve written before about how important I think the service standard is, and I’ve also written about some parts of the service standard.
[2] For some services, risky areas will be infrastructure. For some, the risky part will be policy. And for some it will be other aspects, and that’s fine – all services are different.
[3] For “solid foundation”, read “won’t have to return to this phase again for a while” (all good live services will return to do discovery and alpha on new features at some point, but you don’t want to start cutting production code only to have to reset and return to alpha).
[4] It also means the lead assessor has to be rude and interrupt you, and I don’t like being rude.
[5] The following slides are useful:
- user types/personas;
- user research programme (number of tests/users etc);
- team structure;
- picture of working area;
- single slide of top-level architecture;
- single slide of anticipated volume of next phase;
- optional photos of any engagement with stakeholders/ministers/pop stars.
[6] And a lead assessor, whose main job is to make sure that you all finish on time and get a toilet break.