Measure the thing: continuous improvement through continuous user feedback
How to use simple questionnaires and UX metrics to continuously improve content and forms, and raise the profile of UX with decision-makers.
Background
UX and analytics
One of the UK Government Digital Service’s catchphrases is ‘show the thing’. The ‘thing’ being the research, prototype, content, or service they are working on. The ‘show’ is presenting or demonstrating the actual thing. The point of doing this is to build a shared understanding with team members and decision-makers of the thing they are building. This removes the ambiguity and abstraction of simply ‘talking about the thing’.
I’ve applied this principle to measuring online content and form performance, hence ‘measure the thing’. It’s often a battle convincing decision-makers that something isn’t working that well, and that we should do something about it. I found that capturing and visualising content and form performance data also removes abstraction and builds consensus around what is and isn’t working.
Quantitative data alone won’t tell you why something isn’t working or how to fix it. But numbers — coupled with quotes from actual users — get attention, and provide you with a platform for action, be it more user research or experimentation.
Just enough research: Because even Rocket Surgery is hard sometimes
User research techniques, such as lab-based usability testing, are now well established. But they’re far from universal. Meanwhile, interest in analytics — especially dashboards — continues to grow among the management class. You’d think this was a marriage made in heaven: user research = informed product iterations = improved performance numbers = more user research, and so on. What gets measured gets managed, right? Maybe.
Misconceptions of cost-benefit still inhibit user research’s adoption. A lack of understanding and clarity around appropriate metrics, and a balkanization of digital disciplines, have estranged UX and analytics. This is despite leaders in this space, such as Jeff Sauro and David Travis, writing extensively on the intrinsic relationship between UX and analytics.
Steve Krug’s two seminal UX books, Don’t Make Me Think and Rocket Surgery Made Easy, have gone a long way towards dispelling the myth that usability testing is an expensive and time-consuming exercise. “Avoid big honkin’ reports” and “a morning a month, that’s all we ask”, Krug urges.
But even this is a stretch for many people. Think the web team of one. The marketer juggling campaigns and website content updates. The UX evangelist who, despite their CEO mentioning ‘customers’ at every opportunity, can’t get customer research funding. And that’s before we even consider the HiPPO’s fear of their assumptions crumbling in the face of user feedback.
A more hands-off approach is unmoderated and remote user research using tools like UserZoom and Loop11. These can save time and money in recruitment, lab booking, and facilitation, but still require serious planning, coordination, and analysis.
What gets measured gets attention
“I don’t know what the hell this ‘logistics’ is… but I want some of it.”
World War II US Navy Admiral E. J. King reportedly said this in response to US Army General George C. Marshall’s emphasis on planning the movement, equipment, and accommodation of his troops.
I’ve heard more or less the same thing from many decision-makers in recent years when it comes to analytics. They know they are important; they’re just not sure exactly what they are or how they should be used. But they want some of it.
Use this to your advantage. Decision-makers’ interest in measuring things is one of your best opportunities for getting their attention on the user experience. A veritable UX Trojan horse, if you will.
So tell me why can’t this be love?
As someone with a professional interest in both UX and analytics, I’m always surprised (and dismayed) there isn’t more cross-pollination between the two practitioner communities. I rarely see the same faces at UX and analytics meetups and conferences.
To me, this makes no sense. UXers use metrics like Task Completion Rate and System Usability Scale to measure product performance at the prototype and development stages. But too often measurement starts and ends at the lab door.
I suspect this comes down to the following misconceptions:
- UX is most interested in qualitative insights. Analytics focus on the what, and not the why.
- Analytics is the domain of ecommerce. Metrics like conversion rates and Revenue Per Transaction aren’t as important to government and NGO UXers.
- Decision-makers often care about the wrong metrics, such as ‘hits’. As Gerry McGovern rightly puts it, HITS stands for How Idiots Track Success.
- True insights only come from observing or listening to users.
There’s more than a grain of truth to these perceptions. Too often, analytics is reduced to reporting banal vanity metrics. And nothing does beat watching someone use your product or service to identify UX issues and build empathy.
But there’s another factor. Even well-equipped digital teams can’t test everything all the time. Once a product has been birthed, user researchers are often assigned a new product or feature. This leaves the existing website or app vulnerable to the ‘launch and leave’ approach. I’ve seen it happen many times.
Continuous insights lead to continuous improvement
What is needed is a way of continuously capturing, reporting, and analysing meaningful user insights and performance at minimal cost and effort to you and your users. Not to replace more in-depth user research, but to establish an ongoing, meaningful feedback mechanism. A means to:
- identify trends and themes early
- be alerted to obvious issues
- respond with swift action
- know where to dig deeper with dedicated research when issues are more nuanced.
Generally, most websites can be split into two main elements: content and forms. By focusing on measuring these separately — rather than a nebulous combination of the two — we can better identify issues and isolate good and bad performance across the website.
For forms, we should measure the performance of individual form steps in a way that lets us build a picture of the form’s overall performance. Similarly, for content, we need a way to measure individual web page performance that then tells us the performance of topic or site areas, and website content overall.
To measure content and form performance objectively, and to gain meaningful feedback, what and how we ask users is crucial. The approach I take for both content and forms is similar, even if the questions are different.
Let’s start with measuring online form performance.
How to measure form performance
Although measuring form, shopping cart, and checkout performance isn’t a walk in the park, it’s relatively simple compared with measuring content performance. Here’s why.
Forms have a clearly defined objective (successful submission), as well as a start and end-point, so it’s relatively easy to measure their effectiveness, efficiency, and satisfaction.
In fact, there is an ISO standard that’s ready-made for forms:
Part 11 (1998): …the extent to which a product can be used by specified users to achieve specified goals with effectiveness (task completion), efficiency (time on task) and satisfaction (the user’s reported experience) in a specified context of use (users, tasks, equipment and environments).
Applied simply to a form:
- Did users who began the form finish the form?
- How long did it take them?
- How satisfied were they with the process?
The first metric — effectiveness — is the most important one. You want people to actually complete the form.
The second metric — efficiency — is useful to track and improve. The longer it takes to transact, the more likely the user will become annoyed, bored, or distracted, then abandon. Your users’ time is precious.
The last metric — satisfaction — is attitudinal and cannot be measured by standard interaction data. It requires you to ask the user to rate their experience.
Measuring form effectiveness
If you have Google Analytics on your website, it’s a fairly straightforward task to set up goals and funnels that visualise how many users start then complete or abandon a form (effectiveness), and at which step they abandon.
KissMetrics has a good video tutorial on how to set up goals and funnels.
Once you have a goal and funnel established with data coming in, Google Analytics will visualise your funnel, showing how many users enter, proceed through, and drop out at each step.
Further reading: Google Analytics Goal Flow: How Visitors Really Move Through Your Funnel — LunaMetrics
Measuring form efficiency
If your form is only one page long, or has discrete URLs for each step, Google Analytics will display ‘Avg. Time on Page’ for the form and for each step (efficiency).
You can access this data in Google Analytics via Behaviour > Site Content > All Pages or Content Drilldown. You can then aggregate Time on Page for each form step to get the overall average time to complete the form.
If your multi-step form does not have discrete URLs, it becomes trickier, but not impossible, to track time to complete. Ideally, you should implement a combination of a data layer and Google Tag Manager (GTM) on your website and form. GTM can track page identifiers, such as the Title or H1 tag, and categorise these logically in your Google Analytics property. Note: This will take some work and technical knowledge. You will need to enlist your developers and/or a dedicated web analytics agency.
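To make that more concrete, here is a minimal sketch (in TypeScript) of the kind of data layer push you might fire as the user reaches each step. A GTM trigger listening for the event can then send a virtual pageview or event to Google Analytics, giving you a per-step record to derive time per step from. The event and field names are purely illustrative, not a GTM standard.

```typescript
// Hypothetical helper: push a step identifier into the data layer each time
// the user reaches a new step of a single-URL, multi-step form.

const w = window as unknown as { dataLayer?: Array<Record<string, unknown>> };

function trackFormStep(formName: string, stepName: string, stepNumber: number): void {
  w.dataLayer = w.dataLayer || [];
  w.dataLayer.push({
    event: 'formStep',   // the event name your GTM trigger listens for
    formName,            // e.g. 'licence-renewal'
    stepName,            // e.g. 'personal-details'
    stepNumber,          // 1-based position within the form
  });
}

// Example: the user has just landed on step 3 of a hypothetical form
trackFormStep('licence-renewal', 'payment-details', 3);
```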
Measuring form satisfaction
As for measuring satisfaction, forms are a means to an end, so asking users “How satisfied…?” isn’t appropriate. I don’t know about you, but I’ve never found a form ‘satisfying’. An alternative recommended to me by Jessica Enders is the Single Ease Question (SEQ). Users are asked to rate on a scale of 1 to 7, “overall, how difficult or easy was the task to complete?”.
The SEQ is a perfectly suitable proxy for satisfaction. It can be tweaked to be more specific to the task at hand. For example, ‘overall, how difficult or easy was this form to complete?’.
Because you’re capturing the user’s attitude, be sure to add an optional second question asking why they gave the task that rating. Without this context, you’ll have no credible answer when a decision-maker or client asks you why.
To collect the SEQ and feedback data, you need these fields to integrate with a database from which you can export the data, either through an API or a .csv file. The database can be anything from a simple lookup table through to an entity within your enterprise CRM or data mart.
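As a rough illustration only, here is one way the SEQ rating and optional comment could be captured on the success page and sent to storage. The endpoint and payload shape are hypothetical; your database, CRM, or form platform will dictate the real integration.

```typescript
// Hypothetical capture of the Single Ease Question rating and optional
// 'why' comment, posted to a simple storage endpoint.

interface SeqFeedback {
  formId: string;       // which form the rating relates to
  rating: number;       // 1 (very difficult) to 7 (very easy)
  comment?: string;     // optional free-text reason for the rating
  submittedAt: string;  // ISO date stamp, useful for trend reporting later
}

async function submitSeqFeedback(feedback: SeqFeedback): Promise<void> {
  // Guard against anything outside the 1 to 7 SEQ scale
  if (feedback.rating < 1 || feedback.rating > 7) {
    throw new Error('SEQ rating must be between 1 and 7');
  }
  await fetch('/api/feedback/seq', {  // hypothetical endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(feedback),
  });
}

// Example usage from the success page:
void submitSeqFeedback({
  formId: 'licence-renewal',
  rating: 6,
  comment: 'Easy enough, but the address lookup was slow.',
  submittedAt: new Date().toISOString(),
});
```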
Where there’s an appetite, there’s a way
If your organisation has more of an appetite to invest in analytics than user research, I recommend:
- Implementing Google Tag Manager (GTM) and a data layer
- Establishing user-friendly dashboards for your key metrics.
GTM will help enable the dashboards, and the dashboards will help build your case for greater UX investment. But more on that later.
The form metrics can be tracked more robustly by using a data layer and GTM. GTM expert Simo Ahava provides detailed instructions on advanced form tracking in GTM.
Visualising form metrics
Now that you have the data, the next step is to visualise and report it in a way that decision-makers can quickly and easily understand and act upon.
Cue performance dashboards.
Google Analytics’ standard visualisations and dashboards are quite limited. Google have acknowledged this with the recent release of their dedicated data visualisation tool, Google Data Studio. It’s free and Google are enhancing it continuously.
There are a host of other data visualisation tools on the market that integrate with Google Analytics, such as Tableau and Power BI. They come with a price tag, but your organisation may already be using or have licences for these, so ask around.
What a good form dashboard looks like
A good dashboard tells a story at a glance. One of the best all round visualisations for telling a story is the trend graph. This is because it tracks the progress of your metrics over time. You can see if any changes or intended improvements have had positive or negative impacts.
The UK Government monitor form (service) performance in their beautifully designed performance platform. The platform includes dashboards for UK Government digital services.
A great example is the Practical driving test bookings dashboard. Among other useful metrics, the three ISO measures are displayed in interactive trend graphs.
These metrics are an example of explanatory data. Set up the same dashboard for your own form, and most people in your organisation would quickly understand the story it is telling. If the story is a bad one, that’s an opportunity to push your case for funding deeper user research, more design resources, or whatever it is you need to improve your form’s performance.
If your office has a large screen available to display your dashboards, use it; it will ensure they become a talking point and build interest in your UX efforts.
But before that, you should analyse your exploratory data.
Exploratory data: Finding out why
As a UXer or performance analyst, it’s your job to investigate why your form is performing the way it is. Before rushing off to put together a research plan, let’s delve into some exploratory data. As with any data analysis, look for patterns and themes. They won’t always be conclusive, but they’re an avenue of enquiry for deeper user research.
User feedback
The best place to start is the user feedback given in the ‘comments’ field of the form success screen I discussed above. I’ve found people to be surprisingly forthcoming in telling you what they found good and bad about your form. Collecting this data anonymously and immediately after their task (lodging the form) no doubt helps to elicit this.
The Single Ease Question rating provides a quantitative supplement to the qualitative data. It’s a robust gauge of sentiment that, if positive, you can quickly put in front of decision-makers to engender confidence in the approach taken to get it to that point. Conversely, a poor average rating — combined with negative user quotes — can help encourage buy-in for change.
Other performance indicators
What users often won’t say — or at least accurately describe — is where they had problems in a form. Thankfully, Google Analytics can help identify which form steps are causing the most friction.
Three of the strongest indicators of form issues are:
1. High exit rate — users abandoning your form at a particular step is an obvious sign of major friction.
2. High average time on page — users spending longer on a particular step can suggest several things, including too many questions, difficult to answer questions, or technical issues. Taking up too much of users’ time can lead to satisficing, and lower completion and satisfaction rates — people have better things to do with their time!
3. High validation error rate — validation errors occur when a user enters an invalid response, or does not respond to a mandatory question.
Validation errors are inevitable, but if you’re seeing a higher than average number occurring in any one step, this could suggest your questions are unclear and/or your validation rules are too strict.
You can configure Google Tag Manager to track validation errors as Events in Google Analytics. These can be tracked at the page or field level. Be mindful that having tracking events fire on every field may negatively impact page load speed. Simply tracking the number of times validation errors occur on a form step — regardless of which field(s) they occur in — can provide sufficient indication of UX issues.
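Here is a minimal sketch of the kind of data layer push a GTM trigger could pick up and forward to Google Analytics as an Event. It counts errors per step rather than per field, in line with the advice above; the event and field names are my own, illustrative ones.

```typescript
// Hypothetical data layer push, fired when a form step fails validation.
// A GTM trigger listening for 'formValidationError' can forward it to
// Google Analytics as an Event.

const w = window as unknown as { dataLayer?: Array<Record<string, unknown>> };

function trackValidationErrors(formName: string, stepName: string, errorCount: number): void {
  if (errorCount === 0) {
    return; // nothing to report
  }
  w.dataLayer = w.dataLayer || [];
  w.dataLayer.push({
    event: 'formValidationError', // the event name your GTM trigger listens for
    formName,
    stepName,
    errorCount,                   // total errors shown on this attempt
  });
}

// Example: the user pressed 'Next' on the contact details step and three fields failed
trackValidationErrors('licence-renewal', 'contact-details', 3);
```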
Visualising form exploratory data
Because you’re comparing three metrics against each form step, you’ll probably find tabulating this data the best way to visualise and analyse it.
Use conditional formatting on the cells to aid identifying performance issues (most dashboard and reporting tools offer conditional formatting functionality). For example, if a step metric is above the average for all steps, format the cell pink.
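If you prefer to see the logic spelled out, here is a rough sketch of the same ‘flag anything worse than the average’ rule. The step names and figures are invented purely for illustration.

```typescript
// Rough sketch of the 'highlight anything worse than the average' rule.

interface StepMetrics {
  step: string;
  exitRate: number;             // % of users leaving the form at this step
  avgTimeOnPage: number;        // seconds
  validationErrorRate: number;  // errors per 100 attempts at this step
}

const steps: StepMetrics[] = [
  { step: 'Personal details', exitRate: 8,  avgTimeOnPage: 45,  validationErrorRate: 12 },
  { step: 'Payment',          exitRate: 21, avgTimeOnPage: 110, validationErrorRate: 34 },
  { step: 'Confirmation',     exitRate: 3,  avgTimeOnPage: 20,  validationErrorRate: 2 },
];

const average = (values: number[]): number => values.reduce((a, b) => a + b, 0) / values.length;

const avgExit = average(steps.map((s) => s.exitRate));
const avgTime = average(steps.map((s) => s.avgTimeOnPage));
const avgErrors = average(steps.map((s) => s.validationErrorRate));

for (const s of steps) {
  const flags: string[] = [];
  if (s.exitRate > avgExit) flags.push('exit rate');
  if (s.avgTimeOnPage > avgTime) flags.push('time on page');
  if (s.validationErrorRate > avgErrors) flags.push('validation errors');
  // In a dashboard, these flags would drive the pink cell highlighting
  console.log(`${s.step}: ${flags.length ? 'check ' + flags.join(', ') : 'ok'}`);
}
```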
Other form metrics
I’ve deliberately kept the focus of this post on UX metrics. But there are a couple of others that you should consider reporting:
- Take up (number of people using your form). This is a basic metric and is obviously vital for measuring demand for your product or service.
- Cost per transaction (the cost to build and maintain this form). This is useful for measuring the return on investment of your online form, especially if it has replaced a paper or phone-based service.
The factors affecting these two metrics are entirely contextual to your organisation and industry, so I won’t delve into why.
I haven’t discussed ecommerce metrics, such as conversion rate and revenue per transaction. They’re a whole other ballgame influenced as much by marketing techniques — such as pricing and promotion — as they are by user-friendly design.
If you do manage a website that relies on conversions and revenue, you’ll find that improving your UX metrics first should have a positive effect on your ecommerce metrics.
Similarly, I haven’t tackled technical performance metrics, such as page load and server response times. However, it goes without saying that your UX metrics — and ultimately your ecommerce metrics — should improve with a reliable, fast-loading form.
How to measure content performance
As I mentioned, form performance is relatively simple to measure. Forms have a clearly defined objective (successful submission), as well as a start and end-point.
Content, on the other hand, is a much messier proposition. Especially in non-commercial contexts.
You can test navigation and information architecture effectiveness by using tree testing tools like Treejack. But how do we measure the effectiveness of the content within the pages?
If your content’s purpose is to persuade users to make a purchase, then Google Analytics and A/B testing can help identify which content or content changes are increasing ecommerce metrics, such as sales conversion rate and average order value.
But what about content that doesn’t exist to sell something? Educational content? Basic facts? General advice? Specific instructions? Information for offline contexts?
These are all common to government websites (which are what I currently work on), but they also exist on commercial websites in the form of help and support content. Even the biggest online retailer, Amazon, has extensive content designed to help, not sell.
What does good look like?
Let’s start by defining what we are trying to achieve with this type of content. Generally, I aim for two things:
- user comprehension and
- user confidence.
In other words, the user understands the content and can act on it confidently.
Now, as for measuring it, Google Analytics will tell you practically anything about how and when users are interacting with your website. Except why. And whether users could find the information they are looking for. And whether they understood and acted on it confidently.
Even Avinash Kaushik, Google’s Digital Marketing Evangelist, will tell you this:
There are certainly tertiary ways in which you can answer this question using Google Analytics…but the best way to answer this question? Ask the visitors!
Avinash also shuns the dreaded endless survey questionnaires; these are hard to take at the best of times, but especially so when you’re trying to do something else!
Where I don’t agree entirely with Avinash is with the method for asking users. In this article — Web Analytics Success Measurement For Government Websites — Avinash recommends implementing a questionnaire that pops up when the user leaves the website. This is problematic on two counts:
1. It interrupts the user mid-task and is likely to annoy them. The irony is that you’re hurting your website UX by asking about your website UX!
Capturing data about something changes the way that something works. Even the mere collection of stats is not a neutral act, but a way of reshaping the thing itself.
Alexis Madrigal — The deception that lurks in our data-driven world
2. Asking people to:
- reflect on their primary purpose for visiting the website
- say whether they could complete this purpose, and
- rate their overall experience with the website
is asking a lot of your time-poor users. The questions leave the user to recall the specifics of pages they may have since left and to provide a response that is both accurate and meaningful to the analyst.
If the questionnaire appears randomly, when is the ideal time for it to do so? For example, at the time of writing, users spent an average of 15 minutes on the Consumer Affairs Victoria website. Should we make the questionnaire appear after three minutes? Ten minutes? Fourteen minutes? Or after a set number of pages? What if the user is still browsing and hasn’t yet finished the task they came to complete?
This is inviting annoyance, confusion, and thus questionnaire abandonment or rushed or skewed responses.
These types of questionnaires can be useful for teasing out general website issues or sentiment, but are less effective for identifying specific page or content issues.
So, what instead should we ask and how? The answer has probably been staring you in the face, or seemingly following you around the web…
Was this page helpful?
I began noticing this question at the foot of web pages in about 2011. Over time, this questionnaire, or variations of it, has become commonplace. But I first noticed it on the websites of the biggest names in tech, such as Microsoft and Apple.
If you use one questionnaire on your website, use this one.
What appealed to me was the questionnaire’s specificity:
- It referred only to the page the user was on, so there was immediate context. It was even placed at the end of the page, so the user had to at least scan the page before rating it.
- It was discreet and respectful of the user: no modals, overlays, or pop-ups to interrupt them mid-task.
- It asked one question, with only two options: yes or no. No star ratings, no fence-sitting. The user has to decide whether, on balance, the page delivers on its promise of meeting their need. In 2010, Google replaced YouTube star ratings with a binary thumbs up/thumbs down rating because most users rated videos as either 1 or 5 stars.
- Only once the user selected yes or no did it ask its second (and last) question: why or why not? This was clever. The questionnaire was already short, but not revealing its second half up front helped to reduce the perceived effort required to complete it. This encourages more completions.
- The yes/no would tell us ‘where’ content was or was not working, but the user comments would tell us why.
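If you want to build your own version, a bare-bones sketch of the two-stage widget might look something like this. The markup, element names, and storage endpoint are all hypothetical; your CMS or front-end framework will dictate the specifics.

```typescript
// Bare-bones sketch of the two-stage widget: yes/no first, then reveal the
// 'why or why not?' comment box, then send everything to storage.

function initPageHelpfulWidget(container: HTMLElement): void {
  container.innerHTML = `
    <p>Was this page helpful?</p>
    <button data-answer="yes">Yes</button>
    <button data-answer="no">No</button>
    <div class="follow-up" hidden>
      <label>Why or why not? <textarea></textarea></label>
      <button class="send">Send feedback</button>
    </div>`;

  const followUp = container.querySelector<HTMLElement>('.follow-up')!;
  const textarea = container.querySelector<HTMLTextAreaElement>('textarea')!;
  let helpful: boolean | null = null;

  // Stage 1: record the yes/no answer, then reveal the comment box
  container.querySelectorAll<HTMLButtonElement>('button[data-answer]').forEach((btn) => {
    btn.addEventListener('click', () => {
      helpful = btn.dataset.answer === 'yes';
      followUp.hidden = false;
    });
  });

  // Stage 2: send the answer, comment, page URL, and date stamp to storage
  container.querySelector<HTMLButtonElement>('.send')!.addEventListener('click', () => {
    void fetch('/api/feedback/page-helpful', { // hypothetical endpoint
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        helpful,
        comment: textarea.value,
        pageUrl: window.location.pathname,
        submittedAt: new Date().toISOString(),
      }),
    });
    followUp.hidden = true; // a short 'thanks for your feedback' message could replace it
  });
}

initPageHelpfulWidget(document.getElementById('page-helpful')!);
```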
I figured that if the big tech companies had adopted this questionnaire they were probably getting some benefit from it. We agreed that the worst thing that could happen would be to waste some development time and budget to receive limited feedback. There was only one way to find out.
As it turns out, the number and quality of responses exceeded our expectations. In the first 12 months, we received on average 40 responses per day. Five years later in 2017, the number had risen to 80 per day.
Most importantly, most feedback is specific and actionable. Because the comments are almost always about the page on which the feedback was collected, it makes it easier to identify the issue and adjust accordingly.
What we do with the data
Collecting the data
For our ‘was this page helpful?’ form, we collect the data in a simple database within our CMS. But there are plenty of other options, such as a cloud database in AWS or Azure, or directly into your CRM of choice.
Accessing the data
To access the data, we have a web-based, password-protected interface that lets us filter the data by date range. It also lets us export this data as a CSV file.
Each day, I export a CSV file of the previous day’s data and paste it into a master spreadsheet. I do this because it’s easier to sort the data, remove spam, and do all the other nice things that a spreadsheet allows, such as calculations and conditional formatting. In time, I’d like to investigate ways to import the data into a spreadsheet automatically each day.
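As a starting point for that kind of automation, here is a small, hypothetical Node sketch that appends a daily export to a master CSV file. The file paths are made up, and the naive line splitting assumes a simple export; a proper CSV parser is safer once comments contain commas or line breaks.

```typescript
// Hypothetical Node script: append yesterday's feedback export to a master CSV.

import * as fs from 'fs';

const DAILY_EXPORT = './exports/feedback-yesterday.csv'; // hypothetical path
const MASTER_FILE = './master/feedback-master.csv';      // hypothetical path

function appendDailyExport(dailyPath: string, masterPath: string): void {
  const lines = fs.readFileSync(dailyPath, 'utf8').trim().split('\n');
  const [header, ...rows] = lines;

  // Create the master file with a header row the first time only
  if (!fs.existsSync(masterPath)) {
    fs.writeFileSync(masterPath, header + '\n');
  }

  // Spam removal and sorting can still happen in the spreadsheet afterwards
  fs.appendFileSync(masterPath, rows.join('\n') + '\n');
  console.log(`Appended ${rows.length} rows to ${masterPath}`);
}

appendDailyExport(DAILY_EXPORT, MASTER_FILE);
```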
Enquiries and complaints
A couple of times a week we’ll receive a user enquiry or complaint via ‘was this page helpful?’. We treat them as we do any email or contact form submission. We don’t encourage users to use the form this way, but we don’t punish them for seeking a quick path to an enquiry.
The number of enquiries and complaints we receive this way is minimal and manageable. If users leave their contact details, I’ll forward the comment, date stamp, and URL of the page, to our contact centre to respond.
Because the form has been so valuable, we’ve resisted the temptation of tampering with it by adding explanatory text or links to our contact us page. Overwhelmingly, users understand the form’s purpose.
Analysing and responding to the data
Once I’ve added the daily data into the master spreadsheet and forwarded any enquiries or complaints to our contact centre, I look in the comments for obvious issues I can fix on the spot. These include:
- broken links
- hard to find or missing links needed to complete a task
- poor writing for the web, such as a lack of sub-headings, long sentences and paragraphs.
Not every content problem is obvious, so we need to wait until a pattern emerges in the data. Unless this pattern of feedback resembles a flood more than a trickle, I recommend not thinking too hard about every negative comment at this stage.
Be patient. It may be weeks or months between users’ comments on the same issue, especially in less visited website pages. You won’t have enough data to sort the real problems from the outliers, and believe me, you won’t have enough time to chase every lead. I save this type of analysis for periodic benchmark reports.
Content benchmark reports
Content benchmark reports provide a comprehensive yet digestible mix of qualitative and quantitative data on content performance. It’s all your key data in one place.
The number of periodic benchmark reports you will need depends on your website’s size. A small website of under 50 pages can probably get away with one. For sites of more than 100 pages, it might be best to split reports up by main site areas.
The reports have two main purposes:
- As a tool for benchmarking key performance metrics, so that the impacts of content changes can be measured.
- As an avenue of enquiry for deeper user research.
What’s in the report
Download my Content benchmark report template (xls, 15KB). The following information describes the report’s format and formulas.
I create my reports in a spreadsheet, consisting of three separate tabs:
- Interaction and UX metrics.
- User comments.
- Recommendations.
1. Interaction and UX metrics
The first column lists your site (or site section) page titles in the order in which they appear in the information architecture. You can even indent the page titles to match the hierarchy. Having the data match the site structure helps to spot and investigate usage and problem clusters or patterns. I also recommend hyperlinking the page titles, so you can refer to each page quickly.
Next, add columns for your UX metrics (the order of columns isn’t important; whichever works for you). These include the aforementioned ‘was this page helpful?’ percentage (our most important metric overall), but also several other very useful measures of content performance:
Was this page helpful? — number of ‘Yes’ and ‘No’ responses (separate columns). The raw number of yes and no responses provides perspective. A low or high percentage is less valid when there are few responses. Focus on the pages with more than a handful of responses. These usually correspond with higher volumes of traffic. This is an indicator of your users’ top tasks, which you should be focusing on anyway.
Percentage of was this page helpful? responses — Yes. This is your primary metric for any page. The page is either helping or not helping your users.
Number and percentage of referrals to ‘contact us’ section. A high percentage (above 5%) might be ok for an ecommerce website looking to funnel sales leads to a contact number or form, but for everyone else, this is a sign of user desperation. If they need to contact you because your content was hard to find, understand, or act upon, this represents failure demand.
Ratio of pageviews to unique pageviews, or ‘backtracking’, shows on average how many times users viewed a page in the same session. A high ratio (generally above 1.4) indicates users could be lost. Don’t be concerned if users are often returning to a landing or index page. It’s common for users to click through from this type of page to complete one task, then return to click through to the next page, and so on. Be more concerned when the ratio is high on more self-contained content pages.
Number and percentage of internal searches (separate columns). Most internal website searches are made on the homepage. If users need to use internal search once they’re deeper in the website, it’s a strong sign they are getting lost or not finding what they expected to find. A high percentage (above 5%) on a non-homepage warrants investigation. Use Google Analytics to find the keywords or phrases users are searching for from these pages (Behaviour > Site Search > Search Pages). The information they are looking for might be on the page, but they missed it because your wording didn’t match your users’ vocabulary.
Ratio of entrances to pageviews. If this is low for a page getting good traffic, then it could mean that users need the content but aren’t finding it in search results. You probably need to optimise the content to better match your users’ search terms.
Note: to better highlight issues, use conditional formatting for the ratio and percentage metrics. Any figure above or below the level of tolerance (for example, an internal search percentage above 5%) can be highlighted in red.
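To show how these columns and thresholds hang together, here is a rough sketch that calculates the ratios and applies the same levels of tolerance mentioned above. The page figures are invented; only the thresholds come from the text.

```typescript
// Rough sketch of the content UX columns and their tolerance levels
// (5% internal searches, 5% contact referrals, 1.4 backtracking ratio).

interface PageStats {
  title: string;
  helpfulYes: number;
  helpfulNo: number;
  pageviews: number;
  uniquePageviews: number;
  entrances: number;
  internalSearches: number;
  contactReferrals: number;
}

function reportRow(p: PageStats) {
  const responses = p.helpfulYes + p.helpfulNo;
  const helpfulPct = responses ? (p.helpfulYes / responses) * 100 : null; // null = too few responses to judge
  const backtracking = p.pageviews / p.uniquePageviews;        // repeat views of the page per session
  const searchPct = (p.internalSearches / p.pageviews) * 100;  // searches started from this page
  const contactPct = (p.contactReferrals / p.pageviews) * 100; // referrals to the 'contact us' section
  const entranceRatio = p.entrances / p.pageviews;             // low on a busy page = rarely found via search

  const flags: string[] = [];
  if (backtracking > 1.4) flags.push('backtracking');
  if (searchPct > 5) flags.push('internal search');
  if (contactPct > 5) flags.push('contact referrals');

  return { title: p.title, helpfulPct, backtracking, searchPct, contactPct, entranceRatio, flags };
}

// Example row with invented figures
console.log(reportRow({
  title: 'Renting > Bond refunds',
  helpfulYes: 62, helpfulNo: 38,
  pageviews: 4200, uniquePageviews: 2800,
  entrances: 900, internalSearches: 260, contactReferrals: 150,
}));
```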
No doubt there are dozens of other interesting and useful content UX metrics out there (please add them in the comments!), but those I’ve listed above should provide a solid foundation for any website.
As well as your content UX metrics, add columns for your standard interaction metrics from Google Analytics:
- Pageviews
- Unique pageviews
- Time on page
- Entries and percentage of entries
- Exits and percentage of exits
- Bounce rate.
These data add context and perspective to your UX metrics. For example, a high percentage of internal searches on a page is usually something to be concerned about. However, if the page is being viewed by just a handful of people, you can turn your attention elsewhere.
I won’t go into further detail about these metrics, but if you want to learn more, Ashraf Chohan of the Government Digital Service has written a great summary in relation to the GOV.UK website.
2. Qualitative data (user comments)
In this tab add the data from ‘was this page helpful?’ for that period into the following columns:
- Helpful — Yes/No
- User comments
- Page URL or title
- Date stamp.
Sort the sheet by page URL/title, so that you can review the comments by page. If you have time, you can then order the pages into the same hierarchy as the IA, but it’s not as useful here as it is in the quantitative data tab.
Apply conditional formatting to the first column (green for Yes, red for No), so that you can spot clusters of positive and negative feedback. The comments are where the real value lies, but it can be useful to see if there are any patterns, such as clusters of ‘No’ responses.
Now the real work begins. Filter the data by page URL/page title. For each page, analyse the users’ comments:
- Are there any themes emerging? Users aren’t always consistent or articulate in describing their issue with the content, but many issues are clear once the comments are all in the one place. Start taking note of any insights or themes, because they will form the basis of your recommendations.
- Check the quantitative data to see if it backs up the user comments. If, for example, users are saying they can’t find something they expected on a page, look to see if that page’s search data and referrals to your contact page are high. If many users are saying they can’t find something, you shouldn’t need the quants to justify making improvements. However, the quants may provide useful support if content owners are skeptical of comments alone.
For larger websites with high traffic volumes, you may need to be more systematic in your approach by creating codes (or tags) for themes and calculating the percentages of comments per theme. Further reading: How to code & analyse verbatim comments — MeasuringU.
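If you do go down the coding route, a very crude sketch of tagging comments against themes and reporting percentages might look like this. The themes and keyword patterns are invented for illustration; real coding still benefits from a human read-through.

```typescript
// Crude sketch of coding verbatim comments against themes and reporting
// the percentage of comments per theme.

interface Comment {
  pageUrl: string;
  helpful: boolean;
  text: string;
}

const themes: Record<string, RegExp> = {
  'Cannot find information': /can'?t find|couldn'?t find|where is/i,
  'Content unclear': /confus|unclear|jargon|don'?t understand/i,
  'Contact details wanted': /phone|call|email|contact/i,
};

function codeComments(comments: Comment[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const theme of Object.keys(themes)) counts[theme] = 0;
  counts['Other'] = 0;

  for (const c of comments) {
    const matched = Object.entries(themes).find(([, pattern]) => pattern.test(c.text));
    counts[matched ? matched[0] : 'Other'] += 1;
  }

  // Convert raw counts to percentages of all comments
  const total = comments.length || 1;
  return Object.fromEntries(
    Object.entries(counts).map(([theme, n]) => [theme, Math.round((n / total) * 100)])
  );
}

// Example usage with a few invented comments
console.log(codeComments([
  { pageUrl: '/renting/bonds', helpful: false, text: "Can't find the bond refund form" },
  { pageUrl: '/renting/bonds', helpful: false, text: 'Too much jargon, very confusing' },
  { pageUrl: '/renting/bonds', helpful: true, text: 'Thanks, exactly what I needed' },
]));
```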
3. Recommendations
Once you’ve grouped the issues by theme for each page, you now need to prioritise which to tackle first. A quick, simple way of doing this is to compare the number of negative responses with the number of pageviews.
Prioritise pages with higher than average pageviews and higher than average negative comments.
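A minimal sketch of that comparison, using invented figures, could be as simple as this: keep only the pages that are above average on both counts, then rank them by negative responses.

```typescript
// Minimal sketch: surface pages with above-average pageviews AND
// above-average 'No' responses to 'was this page helpful?'.

interface PageFeedback {
  title: string;
  pageviews: number;
  negativeResponses: number; // 'No' answers to 'was this page helpful?'
}

function prioritise(pages: PageFeedback[]): PageFeedback[] {
  const avg = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  const avgViews = avg(pages.map((p) => p.pageviews));
  const avgNegative = avg(pages.map((p) => p.negativeResponses));

  return pages
    .filter((p) => p.pageviews > avgViews && p.negativeResponses > avgNegative)
    .sort((a, b) => b.negativeResponses - a.negativeResponses); // worst first
}

console.log(prioritise([
  { title: 'Bond refunds', pageviews: 5200, negativeResponses: 41 },
  { title: 'Rent increases', pageviews: 3100, negativeResponses: 12 },
  { title: 'Ending a lease', pageviews: 4800, negativeResponses: 28 },
]));
```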
Citizens Advice UK have a more sophisticated approach, which I’d love to put into practice: How to prioritise 3,000 pages? Start with data — Ian Ansell, Digital Data Scientist at Citizens Advice.
Frequency of reporting
The frequency of these reporting periods depends on the volume of data you receive for any given website section. The more popular your content, the more pageviews and feedback responses you’ll receive, and the sooner the benchmark report becomes meaningful enough to produce.
Monthly, quarterly, yearly; choose whichever duration works for that section. The key is to continue producing the report at the same intervals. This will allow you to compare performance consistently from report to report.
Statistical confidence
I’m often asked about the statistical significance of our ‘was this page helpful?’ quantitative data. Statistical significance is important; the higher the confidence level the better. But this type of research is largely for sentiment measurement and exploration of the user experience.
In his excellent article, How confident do you need to be in your research?, Jeff Sauro suggests that a confidence level of 80% is sufficient “when you need only reasonable evidence — when, for example, you’re looking at product prototypes, early-stage designs, or the general sentiments from customers”. And remember, the most important data are the comments your users provide!
Visualising the data
The point of capturing this data is to use it to make continuous website improvements. Your data visualisation should reflect the impact of changes over time, for which a trend graph is most effective.
I have an overall website trend graph for ‘was this page helpful?’ percentage drawn from the aggregate of data across all pages on the website. I also have trend graphs for each major website section.
My data interval is monthly, but as with reporting periods, adjust this to whichever period works for you. For less visited websites, longer intervals — such as quarterly — may be more suitable.
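For what it’s worth, rolling the raw yes/no responses up to a monthly helpfulness percentage is a simple aggregation. Here is a small sketch, assuming each response carries a date stamp as described earlier; the dates and answers are invented.

```typescript
// Small sketch: aggregate daily yes/no responses into a monthly
// helpfulness percentage for the trend graph.

interface Response {
  submittedAt: string; // ISO date stamp, e.g. '2017-03-14'
  helpful: boolean;
}

function monthlyHelpfulness(responses: Response[]): Record<string, number> {
  const byMonth: Record<string, { yes: number; total: number }> = {};

  for (const r of responses) {
    const month = r.submittedAt.slice(0, 7); // 'YYYY-MM'
    byMonth[month] = byMonth[month] ?? { yes: 0, total: 0 };
    byMonth[month].total += 1;
    if (r.helpful) byMonth[month].yes += 1;
  }

  return Object.fromEntries(
    Object.entries(byMonth).map(([month, { yes, total }]) => [month, Math.round((yes / total) * 100)])
  );
}

console.log(monthlyHelpfulness([
  { submittedAt: '2017-03-02', helpful: true },
  { submittedAt: '2017-03-14', helpful: false },
  { submittedAt: '2017-04-01', helpful: true },
]));
// { '2017-03': 50, '2017-04': 100 }
```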
When it comes to data visualisation, nothing gets decision-makers’ attention more than an upward or downward trend. If you don’t already have decision-maker buy-in for more substantial user research, such as usability testing and interviews, then a downward trending ‘helpfulness’ rating might be your best weapon to change that.
Indeed, you should make sure as many people as possible within your organisation see and talk about this data. If the website helpfulness rating is poor, then the conversation will almost always turn to why. There’s your opening.
You can and should also trend graph the other UX metrics, but a graph that says fewer people are finding your website helpful says more than any other metric can.
How automation can save you from data entry hell
If you have a large website with many sections, collecting and inputting data can be very time consuming — especially when you’re the only one doing it! Try to avoid the trap of collecting and reporting so often that you have little time to analyse and action the data.
The best way to do this is to automate the data collection as much as you can. The way we’ve done this is through a combination of Google Tag Manager, Google Analytics, and Google Sheets.
Among many things, Google Tag Manager allows you to measure activity, or ‘Events’, occurring within a web page. One of these Events is the submission of ‘was this page helpful?’ yes/no data. By configuring Google Tag Manager to ‘listen’ for this Event, it then sends yes/no data to our Google Analytics account.
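In case it helps, here is a minimal sketch of the kind of data layer push GTM could listen for when the yes/no answer is submitted. The event and field names are illustrative, and the free-text comment is deliberately left out (see the note below about personally identifiable data).

```typescript
// Hypothetical data layer push for the yes/no answer only, which a GTM
// trigger can forward to Google Analytics as an Event.

const w = window as unknown as { dataLayer?: Array<Record<string, unknown>> };

function pushHelpfulAnswer(helpful: boolean): void {
  w.dataLayer = w.dataLayer || [];
  w.dataLayer.push({
    event: 'pageHelpful',                  // the event name your GTM trigger fires on
    helpfulAnswer: helpful ? 'yes' : 'no', // becomes the GA Event label
    pagePath: window.location.pathname,
  });
}

// Example: called from the widget's yes/no click handler
pushHelpfulAnswer(true);
```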
From there, with the help of a digital analytics agency, we set up a content benchmark report in Google Sheets. It contains all of the quantitative data formulas from our earlier Excel spreadsheet, but it can take advantage of the Google Analytics Reporting API. Once connected and configured, we simply need to set the date range we want to report on, and a few clicks later the spreadsheet updates the data.
You’ll still need to add your qualitative data from your ‘was this page helpful?’ database, but you’ll no longer need to manually enter Google Analytics data.
Note: don’t be tempted to capture user feedback in Google Analytics. For one, Google Analytics has character limits, so longer responses will be truncated. Secondly, as mentioned, some users will add their contact details to their responses. Having Google Analytics collect personally identifiable data is a breach of Google’s usage guidelines and probably also your organisation’s privacy policy.
For trend data, I use Google Data Studio to visualise the data. Google Analytics also has API connectors for most big data visualisation tools, such as Tableau and Microsoft Power BI.
In summary
1. You don’t need a big budget to make a big difference.
2. Don’t burden users with long, interruptive questionnaires.
3. Simply add:
- the Single Ease Question and comments field to every form success/complete page, and
- ‘Was this page helpful?’ to every content page.
4. Use quantitative data as:
- a barometer of health
- an avenue of enquiry
- an opportunity to engage decision-makers for investment in user research. Get the data and comments in front of decision-makers as often as possible.
5. Use qualitative data for insights.
6. Don’t fret about statistical confidence — look for patterns.
7. Automate data collection and reporting to free up your time to analyse and action.
In conclusion, I’ll leave you with a quote from Gerry McGovern:
“We need people who know that digital is never done, that it is not a series of projects but rather a stream of continuous improvements.” — Evidence, not ego, must drive digital transformation