The best KPIs support your tech comm strategy

The best Key Performance Indicators (KPIs) in tech comm are aligned to measure the success of your documentation strategy.

That’s some advance insight I got from Rachel Potts, who will run a workshop about “Developing KPIs” for tech comm at TCUK in Bristol in a few weeks.

Measuring performance

KPIs are “a type of performance measurement to evaluate success… often success is simply the repeated, periodic achievement of some level of operational goal (e.g. zero defects, 10/10 customer satisfaction, etc.). Accordingly, choosing the right KPIs relies upon a good understanding of what is important to the organization.” (Wikipedia, “Performance indicator“)

But KPIs can be tricky! Says Business Administration professor H. Thomas Johnson: “Perhaps what you measure is what you get. More likely, what you measure is all you get. What you don’t (or can’t) measure is lost.” (Quoted and explained in a Lean Thinker blog post)

KPIs in tech comm

Some KPIs in tech comm are also deceptive. To pick a glaring example, measuring grammatical and spelling errors per page is comparatively easy and will probably help to reduce that figure. But one very fast way to improve this KPI is by changing the page layout, so there’s less text per page. Fewer words and more pages lead to fewer mistakes per page – without correcting a single word. Also, the measure won’t improve documentation that’s out of date or incomplete or incomprehensible.

Rachel advised me: “It depends on strategy and purpose: What’s right for one team is completely wrong for another. Measuring errors on the page is only a valuable KPI if the number of errors on a page relates closely to the purpose of your documentation. If there is a close relationship, then that’s a useful KPI!”

Strategic KPIs

So what would be alternative KPIs, depending on particular tech comm strategies?

If your strategy is to make customer support more cost-effective, you can measure (expensive) support calls against (cheaper, self-service) documentation traffic, while trying to align your documentation topics, so they can effectively answer support questions.
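Here’s a minimal sketch of how such a measure could be computed, assuming you can export monthly support call counts and help-page visits; the cost figures and the function name are invented for illustration, not part of any particular strategy:

    COST_PER_SUPPORT_CALL = 12.00   # assumed average cost of one handled support call
    COST_PER_DOC_VISIT = 0.05       # assumed cost of serving one help-page visit

    def support_deflection(support_calls, doc_visits):
        """Relate support volume to documentation traffic for one period."""
        total = support_calls + doc_visits
        if not total:
            return {"deflection_rate": 0.0, "cost_per_contact": 0.0}
        cost = support_calls * COST_PER_SUPPORT_CALL + doc_visits * COST_PER_DOC_VISIT
        return {"deflection_rate": round(doc_visits / total, 3),
                "cost_per_contact": round(cost / total, 2)}

    # Example: 400 support calls and 9,600 help-page visits in one month
    print(support_deflection(400, 9_600))

Tracked over time, a rising deflection rate and a falling cost per contact would suggest the documentation is actually absorbing support demand.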

If your strategy is to improve your net promoter score and customer retention, you can measure users’ search terms for documentation, number of clicks and visit time per page, while trying to optimize content for findability and relevance to users’ search terms.
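As a rough illustration of how you might aggregate those numbers from an analytics export: the file name and column names (search_term, page, clicks, time_on_page_sec) below are assumptions you’d adapt to whatever your tool actually exports:

    import csv
    from collections import defaultdict

    # Aggregate search terms, clicks, and visit time per help page.
    pages = defaultdict(lambda: {"clicks": 0, "time_sec": 0, "terms": set()})

    with open("search_report.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            page = pages[row["page"]]
            page["clicks"] += int(row["clicks"])
            page["time_sec"] += int(row["time_on_page_sec"])
            page["terms"].add(row["search_term"])

    # Pages that attract many different search terms but little visit time
    # are candidates for a findability or relevance review.
    for url, stats in sorted(pages.items(), key=lambda kv: len(kv[1]["terms"]), reverse=True):
        avg_time = stats["time_sec"] / stats["clicks"] if stats["clicks"] else 0
        print(f"{url}: {len(stats['terms'])} search terms, {avg_time:.0f} s average visit")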

If your strategy is to improve content reuse and topic maintenance, you can measure redundant content to drive down the number of topics that have mixed topic-type content (a rough detection sketch follows the list):

  • As long as you still have abundant conceptual information in task topics, you probably have redundant content. (Though a couple of sentences for context can be acceptable and helpful!)
  • As long as you have window and field help reference information in task or concept topics, you probably have redundant content.
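If your topics live in DITA or a similar topic-typed XML format (an assumption on my part), a small script can at least flag candidates for review; the paths, element names, and threshold below are invented for illustration:

    import glob
    import xml.etree.ElementTree as ET

    CONTEXT_WORD_LIMIT = 60  # assumed: a couple of sentences of context are fine

    for path in glob.glob("topics/tasks/*.dita"):
        root = ET.parse(path).getroot()
        # Paragraphs inside <steps> belong to the task; everything else is "context".
        step_paras = {p for steps in root.iter("steps") for p in steps.iter("p")}
        context_words = sum(len("".join(p.itertext()).split())
                            for p in root.iter("p") if p not in step_paras)
        if context_words > CONTEXT_WORD_LIMIT:
            print(f"{path}: {context_words} words of context outside steps - check for redundancy")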

What do you think? What KPIs are helpful? Which are you using, if any?

Art vs. online: 2 dimensions of curating

Curating is a cool word, or trendy jargon, for what happens both in web technologies and in art museums, but the two are fundamentally different activities.

In this post, I want to offer an alternative view to Rachel Potts, who recently wrote about “When software UX met museum curation”. Where Rachel emphasises similarities, I’d like to focus on the differences, especially as they relate to art museums.

Your artefacts

One serious limitation and difference in curating at art museums, compared to anything in software and online, is that you need to care for original, unique works. If you mount a special exhibition, you need to procure them to begin with. And sometimes you cannot get them, no matter how much you want them in the show to present an artist or an era in history or to make your case.

  • Some works don’t travel because they’re fragile or because the insurance is too costly or because they’re centerpieces in the collection that owns them.
  • Some owners won’t lend works to you, because you cannot satisfy security requirements or because you’re too small a museum or because they don’t like your director.
  • Some works are simply lost.

Of course, you can always make do with fewer or lesser works or, in the case of historic artefacts, with copies, but that invariably hurts the critical response and your attendance.

"Stalking Christina" - other people regarding my favorite painting

Your objectives

Another difference is that for many art museums “enabling users to learn” is one objective among many. And several other objectives are, unfortunately, at odds with it:

  • Some pieces are too sensitive to light or touch or movement to allow more than very few people to experience them.
  • Some museums need to please or placate donors (who may influence what’s shown and what not) and trustees (who may influence what gets paid for and what not).
  • Some museums don’t have the means: They lack the manpower to accommodate visitors for more than a few hours per week, or they lack the expertise to help visitors learn well.

Your audience

A third difference is that art museums that put on ambitious, critically well-regarded exhibitions find that attendance is surprisingly low. The reason is simple and disappointing: Many people don’t want to be enabled to learn in art museums. They don’t want to learn new things, much less have their beliefs challenged. Instead, many people visit an art museum because of the way it makes them feel:

  • Many go to be in the presence of beauty or to be awed. Hence the success of any show whose title mentions a best-selling artist or any of the words “Impressionist”, “Gold” or “Gods” – even if the title is far-fetched and the show mediocre at best. “Dinosaurs” gets kids, and anything that flies or shoots gets their dads.
  • Some go to feel cool. Hence the success of after-work parties in modern art museums.

The words

Roger Hart once told me that it’s futile to try to stop linguistic change. And the web is a great change agent of language:

  • How many kids today know that women warriors (or a river) gave their name to an online store?
  • The German language has known about “email” for centuries (though we only spell it thus after a recent change in orthography); in English, it’s known as “enamel”.

But if language is to represent the real world, I advocate respecting the differences within one word, such as curating. Conflating two similar activities into the same word cheapens our experience of the stuff that surrounds us.

Improving documentation with web analytics, Rachel Potts at TCUK

Rachel Potts (@citipotts) held a 3-hour pre-conference workshop at TCUK, showing us how to use web analytics (such as Google Analytics) to monitor and improve documentation. We went through three use cases; with web analytics, you can:

  1. Answer specific questions, for example, how often a certain page was viewed. This is the simple, obvious use case where you just look at one or a few numbers without having to do any data triangulation.
  2. Measure strategic progress. This means analyzing the available data top-down to find out whether documentation serves an intended purpose. For example, if exception-handling pages in the online help show solid numbers for page views and time on page, while customer support calls about the corresponding issues decrease, your strategy to offer self-service error recovery in online help probably succeeds.
  3. Measure content use. This means analyzing data bottom-up to find out what new insights can be teased out of what’s available. Take an online shop, for example. If you relate search terms with the products that were ultimately bought in the same visit, you can start to build a glossary of products which your customers relate to one another. This is not the well-known “Customers who bought that have also bought this…”. Instead, you might find that people looking for “loafers” often buy “sneakers”. Or people looking for “equity” in the online help often wind up at the “stocks” page sooner or later (see the sketch after this list).
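Purely as an illustration of that last use case (my sketch, not Rachel’s material), counting which pages users reach after a given search term in the same visit could look like this; the input format is an assumption you’d derive from your own session data:

    from collections import Counter, defaultdict

    # Hypothetical (search term, page reached in the same visit) pairs.
    visits = [
        ("equity", "/help/stocks"),
        ("equity", "/help/stocks"),
        ("equity", "/help/funds"),
        ("loafers", "/shop/sneakers"),
    ]

    related = defaultdict(Counter)
    for term, page in visits:
        related[term][page] += 1

    # For each search term, list the pages most often reached in the same visit -
    # raw material for a glossary of terms your users treat as related.
    for term, pages in related.items():
        print(term, pages.most_common(2))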

For the specific improvement of documentation, Rachel had prepared several exercises. The one I found most helpful had us develop an action to achieve a business goal with documentation content. This action consisted not only of a formula, a target and a frequency of what to measure in web analytics, but also of the underlying purpose and the underlying assumptions.
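To make that concrete, here is how such an action might be written down as a small data structure, so the purpose and assumptions travel with the formula, target, and frequency; all field values are invented examples, not Rachel’s exercise:

    from dataclasses import dataclass

    @dataclass
    class AnalyticsAction:
        purpose: str
        formula: str
        target: str
        frequency: str
        assumptions: list

    error_recovery_kpi = AnalyticsAction(
        purpose="Let users recover from errors without calling support",
        formula="views of error-handling topics / support calls about errors",
        target="ratio rises quarter over quarter",
        frequency="monthly",
        assumptions=[
            "Topic views mean users actually found an answer",
            "Support call categories map cleanly to help topics",
        ],
    )

    print(error_recovery_kpi)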

Rachel was very explicit on that last point: She warned us that we should always keep in mind our assumptions in our interpretations and in what we define as a successful outcome!

I had never thought through how web analytics could be applied to documentation, so this was an insightful workshop for me. I believe that all three use cases can be applied successfully to complement user analysis and surveys. However, having documented web analytics software in a previous job, I am skeptical of any absolute numbers that come out of the analysis. Instead, I trust them more to indicate relative change or progress.

Your turn

How have you used (or would you use) web analytics to monitor and improve documentation? Feel free to leave a comment.