3. Qualitative, not quantitative, metrics

[Part 3 of my series on language and service design]

Even now, it’s noticeable how many of the conversations I’ve been involved in about services provided in multiple languages default to quantitative metrics: of the number of people who used this service, x per cent opted to use it in language y.

Anyone who has managed metrics or analytics data on digital services knows that quantitative metrics are of limited use when used in isolation to assess the performance of a service. At the most basic level, “we have had this many users access this page or this content” is meaningless unless we know why users accessed it or what they did with it. If they accessed it so many times because they couldn’t understand it, or because they kept googling to see if they could find a different source of help and kept finding themselves back in the same place, that’s hardly an indicator of success. And, of course, quantitative metrics are particularly poor at assessing the performance of public sector services, where there are often no alternatives to the service.

Qualitative metrics are harder to capture, but far more important. For example: of the people who used the first page of this service and found that they were eligible, x per cent completed the service in one visit, y per cent finished it over multiple visits and z per cent never finished.
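As a rough illustration of what such a measure might look like in practice, here is a minimal Python sketch that classifies eligible users by whether they completed the service in one visit, over several visits, or not at all. The session records and field names are invented for the example; a real service would draw them from its own analytics data.

```python
from collections import defaultdict

# Hypothetical session records: (user_id, visit_number, reached_eligibility, completed).
# These fields are illustrative only, not from any particular analytics platform.
sessions = [
    ("u1", 1, True, True),    # eligible, completed on the first visit
    ("u2", 1, True, False),
    ("u2", 2, True, True),    # eligible, completed over two visits
    ("u3", 1, True, False),   # eligible, never completed
    ("u4", 1, False, False),  # never reached the eligibility check
]

# Group visits by user.
visits_by_user = defaultdict(list)
for user, visit, eligible, completed in sessions:
    visits_by_user[user].append((visit, eligible, completed))

# Keep only users who were found to be eligible at some point.
eligible_users = {
    user: visits
    for user, visits in visits_by_user.items()
    if any(eligible for _, eligible, _ in visits)
}

one_visit = multi_visit = never = 0
for user, visits in eligible_users.items():
    completed_on = sorted(v for v, _, c in visits if c)
    if not completed_on:
        never += 1            # z: never finished
    elif completed_on[0] == 1:
        one_visit += 1        # x: finished within the first visit
    else:
        multi_visit += 1      # y: finished over multiple visits

total = len(eligible_users)
print(f"Completed in one visit:  {one_visit / total:.0%}")
print(f"Completed over multiple: {multi_visit / total:.0%}")
print(f"Never completed:         {never / total:.0%}")
```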

If we can manage to move to qualitative metrics for services provided in multiple languages, my hope is that we can then start discussions about meaningful measures (whether OKRs, KPIs or other formats). That would allow us to set realistic expectations around services. For example, if we’re introducing a Welsh language option for the first time, immediately expecting a proportion of users in line with the percentage of Welsh speakers living in Wales is not realistic: the existing user base will have become used to interacting with that service in a particular language. They may assume that the alternative is not available, is of poor quality, or is not official.

Shifting users from one service channel to another is a long process, and language choice is no different. Even new services will have to deal with user perceptions of past provision from government and other sources. As Heledd Evans pointed out in the CDPS session on language, if you can’t get pay and display machines that work in Welsh in Gwynedd, then the baseline expectation a Welsh-speaking user has of public services is, at best, patchy.

As people involved in service design, we need to understand that the best way to avoid the nonsense of headlines like “we spent thousands on providing this service in language x and only five people used it” is to be open and realistic about setting targets and expectations from the outset. Channel shift metrics for language shouldn’t be any different from approaches to channel shift between offline and online channels: we wouldn’t expect everyone using a paper system to stop using it and adopt the online system as soon as it launched, so why would we expect everyone who wants to interact in Welsh to stop using an existing English-language system and switch to the new Welsh-language one immediately?

Qualitative metrics for language options will help us to set and measure performance; quantitative metrics alone will only ever tell us a small part of the story.

Header image: Digital Analytics by seoability CC BY-SA 4.0
