Discovery is the first phase in the government service standard. “Discovery” is also used to refer to exploratory, speculative work undertaken regularly, regardless of lifecycle (alpha/beta/live) phase. Using this second definition, Jeff Patton suggests that discovery and development work are two equally important tracks in the work of a multidisciplinary team.
What’s the best way to help your team succeed in undertaking regular discovery work? These are my five suggestions.
1. Give every discovery story a time limit to avoid the rabbit holes
With a standard development story or task, you’d create some sort of acceptance criteria to define when that story or task is complete. With discovery work you don’t know what you’re going to discover so you can’t write acceptance criteria in this way. If you try, you usually end up writing something meaningless like “finish investigation”.
What you can do is set a time limit for the work. As discovery work inherently runs the risk of disappearing down a rabbit hole – or a series of them – this is critical. Don’t try to set acceptance criteria but make sure you set a time limit for discovery stories. Failing to do this could mean that you never see the people involved in the story/task ever again.
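To make the rule concrete, here’s a minimal Python sketch of a discovery story that carries a time limit instead of acceptance criteria. The class, field names and example dates are illustrative assumptions, not taken from any real tracking tool:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class DiscoveryStory:
    """A discovery story: no acceptance criteria, just an agreed time box."""
    title: str
    started: date
    timebox_days: int  # the agreed limit, e.g. three working days

    @property
    def deadline(self) -> date:
        """The date by which the team must stop and decide an outcome."""
        return self.started + timedelta(days=self.timebox_days)

    def is_overdue(self, today: date) -> bool:
        """True once the time box has expired: time to close the story out."""
        return today > self.deadline

# A three-day time box started on 4 March expires on 7 March.
story = DiscoveryStory("Prototype address lookup", date(2024, 3, 4), timebox_days=3)
print(story.is_overdue(date(2024, 3, 8)))  # True: the time box has expired
```

The point of the sketch is that the “done” condition is a date, not a deliverable: whatever has been learned by the deadline is what the team decides on.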
2. Separate and label discovery stories so that it’s clear that they’re different
By definition, discovery stories look a bit short of readiness next to a development story. That’s fine, but you don’t want them dragging down the quality of your development stories. They’re different categories with different demands, so it makes sense to keep them in a different place. That different place could be a separate column, a separate backlog or simply a separate label.
Keeping your discovery stories separate allows you to ensure that you have both discovery and development stories ready for planning purposes. It also means that you can have different standards (e.g. Definition of Ready) for how they’re managed.
For example, your user researcher and designer/UX people might review discovery stories, while your tech lead and/or developers review development stories.
Accepting different approaches within one team like this can be liberating: it frees people from sitting in conversations they can’t contribute to, without making them feel like second-class citizens. A UXer/interaction designer isn’t likely to have many thoughts on a tech debt story but is likely to have valuable input on a discovery story.
3. Specify expected outcomes for each story to make sure that you get what you want
Discovery work should end in one of three outcomes, and the whole team should know which one applies:
- Kill it: the idea does not have a realistic chance of delivering value so no further work will be done on it;
- Implement it: the idea has a great chance of delivering value so you create a development story (or stories) to do the work to deliver it;
- Keep investigating: the idea might be able to deliver value but further discovery work is needed to ascertain if this is the case, so a further story or stories are added to the backlog.
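The three outcomes above can be sketched as a small Python taxonomy. This is an illustrative model of the post’s advice, not a real tool; the names and follow-up actions are my own phrasing:

```python
from enum import Enum

class DiscoveryOutcome(Enum):
    """Possible endings for a discovery story, in deliberate order:
    if scrutiny is honest, KILL should be the most common outcome."""
    KILL = "kill"                  # no realistic chance of value; stop here
    IMPLEMENT = "implement"        # strong chance of value; move to development
    KEEP_INVESTIGATING = "keep"    # should be the least common outcome

def close_story(outcome: DiscoveryOutcome) -> str:
    """Return the follow-up action the team takes for each outcome."""
    if outcome is DiscoveryOutcome.KILL:
        return "Record what was learned; no further stories."
    if outcome is DiscoveryOutcome.IMPLEMENT:
        return "Write a development story (or stories) for delivery."
    return "Write a narrower discovery story with its own time limit."

print(close_story(DiscoveryOutcome.KILL))
# Record what was learned; no further stories.
```

Making the outcome an explicit, recorded choice is what lets you later audit how often each one actually occurs.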
The order of these three outcomes is deliberate. If you’re doing discovery work properly, you should be killing more ideas than you implement or keep investigating. In a Government Service Standard Alpha assessment, the panel will look for approaches that were considered and discarded. If the service team can’t demonstrate any, the panel will doubt that enough options were considered for the service to progress to the next stage. You should be similarly suspicious about how many options you’re really investigating – or the level of scrutiny you’re exposing them to – if every option you investigate turns out to be successful.
The final option – keep investigating – should be the least common. An additional prototype or a further bit of testing can sometimes be useful, but it’s far more often the case that you need to take a decision and move on, or to split the story to retain focus. If you keep deciding to keep investigating, something is wrong: ask whether you’re being specific enough about the discovery work in the story that initiates it, or whether the problem needs more resource to resolve (a design sprint, for example).
4. Provide psychological safety to allow your team space to experiment
As we said earlier, unless most of your discovery stories fail to progress to development, you’re doing something wrong. Failing this often and in this way is valuable: it prevents wasting large amounts of development time later in the process. As Jeff Patton says: “one of the most time consuming and expensive ways to learn is to build potentially shippable software.”
Failing like this is emotionally hard. People need the safety to be able to say “the result of this is that we should never ever do it that way” and know that this will be received positively and not as a negative professional judgement on the person who proposed it.
Janna Bastow suggests that phrasing discovery work as “I bet” gives you permission to fail because:
“As a product manager, you’re going to be wrong, a lot. ‘I bet’ gives you permission to fail and try again. After all, it wasn’t you that was wrong – it was your hypothesis. And now that you know what doesn’t work, you’re free to test the next bet.”
5. Remove bias and challenge assumptions to gain the best insight
Even with psychological safety, all teams will tend to do too much to prove that an idea can work. This reflects the observer effect (also known as the Hawthorne effect), where individuals modify their behaviour when observed. If we’re observing or involved in an experiment, we’re likely to try to make it work.
The least we can do is be aware of this. To help avoid it, do everything you can to invite intelligent challenge at an early stage. For example, it’s sometimes possible to involve members of other teams, who won’t share your commitment to the ideas, to provide more balanced feedback.
Dave Snowden also suggests doing multiple pieces of discovery work at the same time:
“the critical thing to avoid Hawthorne effect is multiple small experiments in parallel. I don’t do one thing to see if it’s right.”
Do you have any other suggestions for managing discovery work? Please add a comment if you do.