Improve documentation with quality metrics

Quality metrics for technical communication are difficult, but necessary and effective.

They are difficult because you need to define quality standards and then measure compliance with them. They are necessary because they reflect the value add to customers (which quantitative metrics usually don’t). And they are effective because they are the only way to improve your documentation in a structured way in the long run.

Define quality standards

First, define what high-quality documentation means to you. A good start is the book Developing Quality Technical Information: A Handbook for Writers and Editors, from which I take these generic quality characteristics for documentation topics:

  • Is the topic task-oriented?
    Does it primarily reflect the user’s work environment and processes, and not primarily the product or its interface?
  • Is the topic up-to-date?
    Does it reflect the current version of the product or an older version?
  • Is the topic clear and consistent?
    Does it comply with your documentation style guide? If you don’t have one, consider starting from Microsoft’s Manual of Style for Technical Publications.
  • Is the topic accurate and sufficient?*
    Does it correctly and sufficiently describe a concept or instruct the customer to execute a task or describe reference information?
  • Is the topic well organised and well structured?*
    Does it follow an information model, if you have one, and does it link to relevant related topics?

* Measuring the last two characteristics requires at least basic understanding of topic-based authoring.

The seal of quality

You may have additional quality characteristics or different ones, depending on your industry, your customers’ expectations, etc. As you draft your definition, remember that someone will have to monitor all those characteristics for every single topic or chapter!

So I suggest you keep your quality characteristics specific enough to be measured, but still general enough so they apply to virtually every piece of your documentation. Five is probably the maximum number you can reasonably monitor.

Measure quality

The best time to measure quality is during the review process. So include your quality characteristics with your guidelines for reviewers.

If you’re lucky enough to have several reviewers for your contents, it’s usually sufficient to ask one of them to gauge quality. Choose the one who’s closest to your customers. For example, if you have a customer service rep and a developer review your topics, go with the former who’s more familiar with users’ tasks and needs.

To actually measure the quality of an online help topic or a chapter or section in a manual, ask the reviewer to use a simple 3-point scale for each of your quality characteristics:

  • 0 = Quality characteristic or topic is missing.
  • 1 = Quality characteristic is sort of there, but can obviously be improved.
  • 2 = Quality characteristic is fairly well developed.

Now, such metrics sound awfully loose: Quality “is sort of there” or “fairly well developed”…? I suggest this loose scale for purely pragmatic reasons: Unless you have a small number of very disciplined writers and reviewers, quality metrics are not an exact science.

The benefit of metrics is relative, not absolute. They help you to gauge the big picture and improvement over time. The point of such a loose 3-point scale is to keep it efficient and to avoid arguments and getting hung up on pseudo-exactitude.

Act on quality metrics

With your quality scores, you can determine

  • A score per help topic or user manual chapter
  • An average score per release or user manual
  • Progress per release or manual over time
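The scoring arithmetic behind these figures is simple enough to sketch. The snippet below is a minimal illustration, assuming five characteristics rated 0–2 per topic; the topic names and ratings are made up for the example.

```python
# A minimal sketch of aggregating reviewer scores, assuming five quality
# characteristics rated 0-2 per topic. Topic names and ratings are
# illustrative only.

# Reviewer ratings per topic: characteristic -> 0, 1, or 2
release = {
    "installing": {"task-oriented": 2, "up-to-date": 1, "clear": 2,
                   "accurate": 1, "organised": 2},
    "configuring": {"task-oriented": 1, "up-to-date": 0, "clear": 1,
                    "accurate": 1, "organised": 1},
}

def topic_score(ratings):
    """Score per help topic or chapter: sum over all characteristics."""
    return sum(ratings.values())

def release_average(topics):
    """Average topic score per release or user manual."""
    return sum(topic_score(r) for r in topics.values()) / len(topics)

print(topic_score(release["installing"]))    # → 8
print(round(release_average(release), 1))    # → 6.0
```

Recording `release_average` for each release or manual edition gives you the third figure, progress over time, for free.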

Areas where scores lag behind or don’t improve over time give you a pretty clear idea about where you need to focus: You may simply need to revise a chapter. Or you may need to boost writer skills or add resources.

Remember that measuring quality during review leaves blind spots in areas where you neither write nor review. So consider doing a complete content inventory or quality assessment!

Learn more

There are several helpful resources out there:

  • The mother lode of documentation quality and metrics is the book Developing Quality Technical Information by Gretchen Hargis et al. with helpful appendixes, such as
    • Quality checklist
    • Who checks which quality characteristics?
    • Quality characteristics and elements
  • Five similar metrics, plus a cute duck, appear in Sarah O’Keefe’s blog post “Calculating document quality (QUACK)”.
  • Questionable vs. value-adding metrics are discussed in Donald LeVie’s article “Documentation Metrics: What Do You Really Want to Measure?”, which appeared in STC’s Intercom magazine in December 2000.
  • A summary and checklist from Hargis’ book is Lori Fisher’s “Nine Quality Characteristics and a Process to Check for Them”**.
  • The quality metrics process is covered more thoroughly in “Quality Basics: What You Need to Know to Get Started”** by Jennifer Atkinson, et al.

** The last two articles are part of the STC Proceedings 2001 and used to be easily available via the EServer TC Library until the STC’s recent web site relaunch effectively eliminated access to years’ worth of resources. Watch this page to see if the STC decides to make them available again.

Your turn

What is your experience with quality metrics? Are they worth the extra effort over pure quantitative metrics (such as topics or pages produced per day)? Are they worth doing, even though they ignore actual customer feedback and demands as customer service reps can register? Please leave a comment.


How you can exploit the “Big Disconnect”

Through consumers, web 2.0 and social media exert a disruptive influence on corporate IT: Existing “systems of record” face challenges from new “systems of engagement”.

The thesis is by Geoffrey Moore in an AIIM white paper and presentation, and I’ve come across it by following some links in Sarah O’Keefe’s post “The technical communicator’s survival guide for 2011”. So once again, I find a great idea, summarize and apply it, instead of thinking up something wildly original myself. (Maybe not the worst skill for a tech writer, come to think of it… 🙂 )

Out-of-sync IT developments

Moore’s premise builds on out-of-sync advances of corporate vs. consumer IT:

  • Corporate IT developments recently focused on optimizing and consolidating otherwise mature, database-based “systems of record” which execute all kinds of transactions for finance, enterprise resource planning, customer relationship management, supply chain, etc.
  • Consumer IT, on the other hand, saw the snowballing improvements in access, bandwidth and mobile devices which have quickly pervaded ever more spheres of everyday culture.

“The Big Disconnect”

This imbalance leads to the pivotal insight of Moore’s analysis: As I read it, the disruptive influence on corporate IT occurs not through technologies or processes, but through people.

People are quick to adopt or reject or abandon new consumer IT tools and habits that cater to their needs. The same people feel hampered by corporate systems and workflows that seem unsuitable and obsolete. Moore calls it “The Big Disconnect”:

How can it be that
I am so powerful as a consumer
and so lame as an employee?

How consumer IT affects corporate IT

For the next 10 years, Moore expects that interactive, collaborative “systems of engagement” will influence and complement, though not necessarily replace, traditional “systems of record”:

  • Old systems are data-centric, while new systems focus on users.
  • Old systems have data security figured out, new systems make privacy of user data a key concern.
  • Old systems ensure efficiency, new systems provide effectiveness in relationships.

For a full comparison of the two kinds of systems, see Moore’s presentation “A ‘Future History’ of Content Management”, esp. slides 10-12 and 16.

But does it hold water?

Moore’s analysis has convinced me. I used to think that corporate and consumer IT markets differ because requirements and purchase decisions are made differently. But this cannot do away with the “Big Disconnect” which I’ve seen time and again in myself and in colleagues. Personally, I know that this frustration is real and tangible.

Also, the development of wikis and their corporate adoption is a good case study of the principle that Moore describes. If you know of other examples, please leave a comment.

What does it mean to tech comm?

The “Big Disconnect” affects those of us in technical communications in corporate IT in several ways.

Tech writers write for disconnected corporate consumers. So we do well to integrate some of the features of “systems of engagement” that Moore describes:

  • Add useful tips & tricks to reference facts.
  • Provide discussion forums to complement authoritative documentation.
  • Ensure quick and easy access to accurate and complete documentation.

But technical communications can do one better by helping to ease the drawbacks of engaging systems:

  • Offer easy, comprehensive searches through disparate formats and sources.
  • Moderate forums and user-generated contents carefully to maintain high content standards and usability.

Tech writers are disconnected corporate consumers. So we can push for the improvement of the products and processes we describe or use.

  • On consumers’ behalf, we can advocate for improved usability and for documentation that is more efficient to use.
  • On our own behalf, we can insist on improving workflows that serve a system rather than us writers and our processes.
  • We can urge the replacement of help authoring systems that support only fragments of our documentation workflows with more efficient tools.

Our managers are also disconnected, most likely. So when we argue for any of the above disruptions, we can probably fall back on their experience when we have to justify them. We’ll still need good metrics and ROI calculations, though… 🙂

To read further…

The “Big Disconnect” and its effects connect nicely with a couple of related ideas:

Your turn

Does the Big Disconnect make sense to you – or is it just the mundane in clever packaging? Do you think it’s relevant for technical communications? How else can we tech writers exploit it? Please leave a comment.

Getting ahead as a lone author, the article

“Getting ahead as a lone author”, based on my presentation at last September’s TCUK conference, appeared as a 3.5-page article in the current Winter 2010 issue of ISTC’s Communicator.

Click the cover to download the article in PDF.


I’ve covered lone authors over the last few months in blog posts and in my presentation, after which Katherine Judge, commissioning editor of ISTC’s quarterly, asked me to write it up as an article, which I share with you today.

It’s a concise summary of my talk, along these headings:

  • Overcome benign neglect
  • Buy yourself time
    • Implement topic-based authoring
    • Don’t test when you should be documenting
    • Learn to say ‘later’ and ‘no’
    • Control interruptions
  • Treat documentation as a business
    • Make documentation an asset
    • Estimate documentation effort
    • Plan documentation properly
    • Embrace reporting and metrics

Learn about DITA in a couple of hours

DITA 101, second edition, by Ann Rockley and others is one of the best tool-independent books about DITA. It’s a good primer to learn about DITA in a couple of hours.

Strong context

The book excels in firmly embedding DITA’s technologies and workflows in the larger context of structured writing and topic-based authoring.

I attribute this to the authors’ years of solid experience in these areas, which comes through especially in the earlier chapters.

“The value of structure in content,” the second chapter, illustrates structured writing with the obvious example of cooking recipes. Then it goes on to show you how to deduce a common structure from three realistically different recipes – which I hadn’t seen done in such a clear and concise way.

“Reuse: Today’s best practice,” the third chapter, takes a high-level perspective. First it acknowledges organizational habits and beliefs that often prevent reuse. Then it presents good business reasons and ROI measures that show why reuse makes sense.

Comprehensive, solid coverage

From the fourth chapter on, Rockley and her co-authors describe DITA and its elements very well from various angles:

  • “Topics and maps – the basic building blocks of DITA” expands on the DITA specification with clear comments and helpful examples.
  • “A day in the life of a DITA author” is very valuable for writers who are part of a DITA project. Writing DITA topics and maps is fundamentally different from writing manuals, and this chapter highlights the essential changes in the authoring workflow.
  • “Planning for DITA” outlines the elements and roles in a DITA implementation project for the project manager. Don’t let the rather brief discussion fool you: Without analyzing content and reuse opportunities, without a content strategy and without covering all the project roles, you expose your DITA project to unnecessary risk.
  • “Calculating ROI for your DITA project” has been added for the second edition. It’s by co-author Mark Lewis, based on his earlier white papers: “DITA Metrics: Cost Metrics” and “DITA Metrics: Similarities and Savings for Conrefs and Translation”. It expands on the ROI discussion of chapter 3 and creates minor inconsistencies that weren’t eliminated in the editing process.
  • “Metadata” first introduces the topic and its benefits in general and at length. Then it describes the types and usefulness of metadata in DITA. This might seem a little pedestrian, but it’s actually helpful for more conventional writers and for managers. It ensures they fully understand this part of DITA which drives much of its efficiencies and workflows.
  • “DITA and technology” explains elements and features to consider when you select a DITA tool, content management system or publishing system. This is always tricky to do in a book, as much depends on your processes, organization and budget. While the chapter cannot substitute for good consulting, it manages to point out what you are getting yourself into and what to look out for.
  • “The advanced stuff” and “What’s new in DITA 1.2” continue the helpful elucidation of the DITA specification with comments and examples that was begun in the “Topics and maps” chapter.

Mediocre organization

For all its useful contents, the book deserves better, clearer organization!

  • Redundancies and minor inconsistencies occur as concepts are defined and discussed in several places. For example, topics are defined on pages 4, 24 and 46. The newly added ROI chapter complements the ROI points in the third chapter, but neither has cross-references to the other.
  • The index doesn’t always help you to connect all the occurrences and navigate the text.
  • Chapters are not numbered, yet the numbering of figures in each chapter starts at 1. It’s not a big problem, because references to figures always refer to the “nearest” number; it’s just irritating.

Formal errors

The book contains several errors which add to the impression of poor production values. They don’t hurt the overall message or comprehensibility, but they are annoying anyway:

  • Mixed up illustrations such as the properties box in Word (page 72) vs. the properties box from the File Manager (73)
  • Spelling errors such as “somtimes” (1) and “execeptions” (16)
  • Problems with articles such as “a author” (20) or a system that “has ability to read this metadata” (77)
  • Common language mistakes such as “its” instead of “it’s” (52)

Lack of competition

Another reason why it’s still one of the best books on the topic is that there simply aren’t many others!

  • Practical DITA by Julio Vazquez is the only serious contender, and its practical, in-the-trenches advice complements Rockley’s book very well.
  • [More books are pointed out in the comments, thanks everybody! – Added January 11, 2010.]
  • DITA Open Toolkit by “editors” Lambert M. Surhone, Mariam T. Tennoe, Susan F. Henssonow is a compilation of Wikipedia articles. Amazon reviewers call other titles produced by the same editing team and publisher a scam.

Of course, several other honorable and worthwhile books include articles or chapters on DITA and/or discuss DITA in context of specific tools.

My recommendation

Despite its shortcomings, the book’s own claim is valid: “If you’re in the process of implementing DITA, expect to do so in the future, or just want to learn more about it without having to wade through technical specifications, this is the book for you.”

I recommend that you read it if you are

  • Involved in a project to implement DITA
  • Writing or translating documentation in a DITA environment
  • Managing technical writers

Your turn

Have you read this book? What’s your opinion? Can you recommend other books or resources about DITA? Feel free to leave a comment!

Welcome to summer reruns, part 2

My blog and I are taking it a little easier towards the end of the summer.

I’ve had a wonderful time with this blog so far, and I thank each and every one of you for reading, lurking or commenting. I’ve learned a lot from your comments, and I appreciate your support! It’s been a warm summer’s breeze… 🙂

As we’re gearing up for the new season, here are some reruns from the last year or so.

Popular posts from my blog

Top 10 things that users want to do in a help system

This is one of my two most popular posts by far. I draw a parallel between a help system and a department store or a library to illustrate how customers want to navigate each one.

Reality check: Writing for scannability and localization

What happens if our noble attempts at clear topic structures and parallel sentence structures meet head-on with unsuspecting readers?

Portable apps for tech writers

This is the first post in a four-part series where I recommend free and (mostly) lightweight applications that can help any tech writer in their daily tasks.

Noteworthy posts from elsewhere

If you want an introduction to content strategy, I think you could do a lot worse than reading these excellent posts:

Complete Beginner’s Guide to Content Strategy

Content Lifecycle

The extraordinary world of content strategists

The last two posts are beginnings of series, so be sure to follow the links at the end of each.

Welcome to summer reruns, part 1

My blog and I are taking it a little easier towards the end of the summer.

I’ve had a wonderful time with this blog so far, and I thank each and every one of you for reading, lurking or commenting. I’ve learned a lot from your comments, and I appreciate your support! It’s been a walk on the beach… 🙂

As we’re gearing up for the new season, here are some reruns from the last year or so.

Popular posts from my blog

Trends in technical communication 2010

This is one of my two most popular posts by far. With the help of several readers, we’re commenting on and discussing two trends from a Scriptorium webinar, “Technical Communication Trends for 2010 and Beyond”. Sarah O’Keefe predicts that tech comm will include content curation. And Ellis Pratt proposes that technical communications will soon also shape an emotional user experience. Incidentally, Ellis will speak on the same topic at TCUK, so go see him if you have a chance!

Making it as a lone writer

This is the first post in a small series where I share lessons learned and best practices for how lone writers can get ahead. Incidentally, I will speak on the same topic at TCUK, so come see me if you have a chance!

Reading outside the tech writing box

I’ve found that reading helps my writing, even off-topic reading. Technology journalism works especially well for me. I share my favorite magazines and anthologies and link to five articles that you can read online.

Noteworthy posts from elsewhere

Speaking of reading around. If you want to read up on neighboring disciplines, I recommend these three excellent posts:

Complete Beginner’s Guide to Content Strategy

Complete Beginner’s Guide to Information Architecture

Standardized Approaches to Usability

What tech writers can learn from UX designers

We tech writers have a lot in common with user experience designers, really!

Even though we may step on each other’s turf occasionally, we share a common objective: Make customers and users successful with our products. Add the fact that our jobs combine creative challenges with restricted resources, and our situations are structurally so similar that we can learn a lot from each other.

So if we technical communicators are serious about overcoming departmental silos with our contents, we should also pick up a few tricks from the guys and gals in the next room or cubicle…

Here are just a few lessons I’ve learned from blogging UX designers that speak to us technical communicators as well. Incidentally, they are strategic and process-related, but I’m sure we can also learn from their specific methods:

Doing stuff you weren’t hired to do? Keith LaFerriere’s “Why Did You Hire Me?” has some advice for you. It easily applies to technical writers who find themselves writing for marketing or editing and polishing anything from sales offers to presentations for the executive board. (All of these are fine, respectable tasks – but maybe they’re not what you excel at or where you see yourself going…)

Countering less-than-brilliant ideas which managers or customers may have about your work is the topic of Whitney Hess’s “No One Nos: Learning to Say No to Bad Ideas”: “It is my job to put an end to bad design practices within an organization before I can make any progress on improving the lives of our customers.” Replace “design” with “documentation”, and you have yourself a mission. Whitney uses best practices and data to make her case and also shows just how to say “no”.

Winning your manager’s support to implement a new method, such as topic-based authoring, is essential but difficult, especially if the cost-benefit argument unexpectedly fails. In “Selling Usability to Your Manager”, David Travis recommends a more psychological approach to present salient benefits beyond the bottom line.

Getting buy-in for better processes across your company is the subject of a virtual panel discussion “Evangelizing UX Across an Entire Organization” written up by Janet M. Six. Designers and information architects weigh in with helpful advice, such as:

  • “For different groups to embrace [it], those groups need to see the value in the process — not only for the organization as a whole, but for their particular role and discipline.”
  • “Practitioners should reframe the issue by asking managers to support and enhance the ongoing satisfaction of the customer experience.”
  • “The one clear finding that has come out of the entire UX movement is that focusing on your customers is the surest, most direct way for any company to make money.”

Realigning can be a good alternative to redesigning, as Cameron Moll explains in “Good Designers Redesign, Great Designers Realign”. The “Incessant Redesign” seems to afflict designers more often than technical communicators. Still, his basic idea also applies to documentation: “The desire to redesign is aesthetic-driven, while the desire to realign is purpose-driven.” There’s a place for either type of project; Cameron’s post helps you to make the right decision.

What alliances with other departments have you found helpful? What methods and tricks have you learned from colleagues in design, training, customer support, etc.? Feel free to leave a comment!