Improve documentation with quality metrics

Quality metrics for technical communication are difficult, but necessary and effective.

They are difficult because you need to define quality standards and then measure compliance with them. They are necessary because they reflect the value your documentation adds for customers (which quantitative metrics usually don't). And they are effective because they are the only way to improve your documentation in a structured way over the long run.

Define quality standards

First, define what high-quality documentation means to you. A good start is the book Developing Quality Technical Information: A Handbook for Writers and Editors, from which I take these generic quality characteristics for documentation topics:

  • Is the topic task-oriented?
    Does it primarily reflect the user's work environment and processes, rather than the product or its interface?
  • Is the topic up-to-date?
    Does it reflect the current version of the product or an older version?
  • Is the topic clear and consistent?
    Does it comply with your documentation style guide? If you don’t have one, consider starting from Microsoft’s Manual of Style for Technical Publications.
  • Is the topic accurate and sufficient?*
    Does it correctly and sufficiently describe a concept, instruct the customer how to execute a task, or present reference information?
  • Is the topic well organised and well structured?*
    Does it follow an information model, if you have one, and does it link to relevant related topics?

* Measuring the last two characteristics requires at least a basic understanding of topic-based authoring.

The seal of quality

You may have additional quality characteristics or different ones, depending on your industry, your customers’ expectations, etc. As you draft your definition, remember that someone will have to monitor all those characteristics for every single topic or chapter!

So I suggest you keep your quality characteristics specific enough to be measured, but still general enough so they apply to virtually every piece of your documentation. Five is probably the maximum number you can reasonably monitor.

Measure quality

The best time to measure quality is during the review process. So include your quality characteristics with your guidelines for reviewers.

If you're lucky enough to have several reviewers for your content, it's usually sufficient to ask one of them to gauge quality. Choose the one who's closest to your customers. For example, if you have a customer service rep and a developer review your topics, go with the former, who is more familiar with users' tasks and needs.

To actually measure the quality of an online help topic, or of a chapter or section in a manual, ask the reviewer to rate each of your quality characteristics on a simple 3-point scale:

  • 0 = Quality characteristic or topic is missing.
  • 1 = Quality characteristic is sort of there, but can obviously be improved.
  • 2 = Quality characteristic is fairly well developed.

Now, such metrics sound awfully loose: Quality "is sort of there" or "fairly well developed"...? I suggest this for purely pragmatic reasons: Unless you have a small number of very disciplined writers and reviewers, quality metrics are not an exact science.

The benefit of metrics is relative, not absolute. They help you gauge the big picture and improvement over time. The point of such a loose 3-point scale is to keep the process efficient and to avoid arguments and getting hung up on pseudo-exactitude.
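
To make this concrete, here is a minimal sketch (in Python, with made-up ratings and the five characteristics from above) of how one reviewer's ratings for a single topic could be recorded and rolled up into a topic score:

    # One reviewer's ratings for a single help topic, using the 3-point scale:
    # 0 = missing, 1 = sort of there, 2 = fairly well developed.
    topic_ratings = {
        "task-oriented": 2,
        "up-to-date": 1,
        "clear and consistent": 2,
        "accurate and sufficient": 1,
        "well organised and structured": 0,
    }

    # Topic score: sum of the ratings, out of 2 points per characteristic.
    topic_score = sum(topic_ratings.values())
    max_score = 2 * len(topic_ratings)
    print(f"Topic score: {topic_score}/{max_score}")  # Topic score: 6/10

A spreadsheet works just as well; the point is simply that each topic ends up with a comparable number.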

Act on quality metrics

With your quality scores, you can determine (see the calculation sketch after this list):

  • A score per help topic or user manual chapter
  • An average score per release or user manual
  • Progress per release or manual over time
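
As a minimal sketch of these calculations (again in Python, with hypothetical topic names and scores), averaging per release and comparing releases over time could look like this:

    # Hypothetical per-topic scores (0-10, i.e. five characteristics rated 0-2)
    # collected during review for two releases.
    scores_by_release = {
        "Release 1.0": {"Install guide": 6, "Backup how-to": 4, "API reference": 5},
        "Release 1.1": {"Install guide": 8, "Backup how-to": 5, "API reference": 7},
    }

    # Average score per release; comparing averages (and per-topic scores)
    # across releases shows progress over time.
    for release, topic_scores in scores_by_release.items():
        average = sum(topic_scores.values()) / len(topic_scores)
        print(f"{release}: average score {average:.1f}")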

Areas where scores lag behind or don’t improve over time give you a pretty clear idea about where you need to focus: You may simply need to revise a chapter. Or you may need to boost writer skills or add resources.

Remember that measuring quality during review leaves blind spots in areas where you neither write nor review. So consider doing a complete content inventory or quality assessment!

Learn more

There are several helpful resources out there:

  • The mother lode of documentation quality and metrics is the book Developing Quality Technical Information by Gretchen Hargis et al. with helpful appendixes, such as
    • Quality checklist
    • Who checks which quality characteristics?
    • Quality characteristics and elements
  • Five similar metrics, plus a cute duck, appear in Sarah O'Keefe's blog post "Calculating document quality (QUACK)".
  • Questionable vs. value-adding metrics are discussed in Donald LeVie's article "Documentation Metrics: What Do You Really Want to Measure", which appeared in STC's intercom magazine in December 2000.
  • A summary and checklist from Hargis’ book is Lori Fisher’s “Nine Quality Characteristics and a Process to Check for Them”**.
  • The quality metrics process is covered more thoroughly in “Quality Basics: What You Need to Know to Get Started”** by Jennifer Atkinson, et al.

** The last two articles are part of the STC Proceedings 2001 and used to be easily available via the EServer TC Library until the STC’s recent web site relaunch effectively eliminated access to years’ worth of resources. Watch this page to see if the STC decides to make them available again.

Your turn

What is your experience with quality metrics? Are they worth the extra effort over purely quantitative metrics (such as topics or pages produced per day)? Are they worth doing, even though they ignore the actual customer feedback and demands that customer service reps can register? Please leave a comment.

Responses

  1. Hallo Kai

    Great post! I agree with you that it’s valuable to compile your own criteria to suit your customers and product.

    Once you have collected some metrics, it would be useful to have a tool to store and crunch the numbers. I guess a spreadsheet would do the trick, but do you know of any ready-made tools?

    Another idea is to collect metrics from the readers too, by letting them rate your topics. You could add these ratings to your internally-compiled metrics and get all sorts of interesting comparisons.

    I don’t have much experience in this area, but I think it has good potential for interesting and useful results. Thanks for putting together such a well-researched post!

    Cheers, Sarah

  2. Kai, do you have any experience in matching up the internal rating system you describe here (or similar ones by others) against measures based on something like customer feedback, ratings, and/or traffic? In particular, do you know if people have been able to make a correlation between the metrics criteria described here and customer satisfaction?

    I ask because we have become quite reliant on external feedback (via the measures I note above, plus some additional ones), and one of the interesting things about high-rated topics (articles) is that it's surprisingly difficult to abstract the qualities that seem to make them popular and then apply those qualities to other topics. More concisely, it's more difficult than we originally thought to create topics that users really like. (Not a profound insight, just an observation w/r/t metrics.)

    Anyway, any thoughts you might have on this match-up between metrics and customer sat would be very welcome! Thx.

    • Good questions, Mike.

      No, I don’t know of any measurable correlation between increased quality and increased customer satisfaction. I would think that this correlation exists, but I haven’t seen it proven.

      I think your project to transfer qualities of popular topics to other topics is very fascinating and admirable. If you find it’s difficult (or even impossible), maybe that’s similar to news stories or songs or movies or books where even seasoned pros are frequently surprised at what flies and what doesn’t…?

      Also, I’m thinking that what’s popular and what’s good in documentation might be two separate qualities. Good documentation is hard to deliver, it’s frequently expected, but rarely praised. Popular documentation can be a genius shortcut or workaround, maybe even a trick to use a product in an unintended way…

      So maybe these two qualities can exist side by side: Use feedback, ratings, traffic to determine which sort of topics to provide. And use quality metrics to ensure that the topics are solid, consistent and (technically) useful.

      I know I’m dodging your question, but I’m not sure how the good and the popular can be merged – beyond doing both in parallel. Hope it helps anyway… :-)

        • Thanks for your thoughts! Your point about the difference between popular and good doc is a valid one. There is also the issue that user-based metrics are not necessarily the entire story either. As a counter-example, we know that topics that are primarily collections of links to additional material are often rated low, presumably because of user frustration that the topic does not answer their immediate question. But that doesn't mean the topic is not good, in the sense that it provides a discovery mechanism for topics that might otherwise be difficult to find. Or something.

        Anyway, if you do run across any information that somehow correlates quantifiable qualities of topics against customer satisfaction, I’d sure love to hear about it!

        Thx!

  3. Good post. And I agree, Developing Quality Technical Information is an excellent resource!
