When tech comm does other teams’ work

Should documentation be expected to do the job of other teams and departments to make up for their shortcomings?

That was the essential question in an interesting conversation I once had with a fellow tech writer. It’s basically a twist on the general idea that you can only pick two out of three: high quality, speed, low cost.

The writer said she had changed her way of writing user manuals to include fewer screenshots, because some of them were not essential to the described task, and they’re difficult to translate and update.

Less than efficient reuse

Her manager didn’t appreciate her efforts to create equally useful documentation more efficiently; instead, he objected to the new manual. He asked her to use as many screenshots as before, arguing they were necessary for three additional purposes of the manual:

  • Sell the product – though dedicated sales materials are also available.
  • Double as self-study training material – though training is also available.
  • Make up for poor usability in places where the software isn’t intuitive.

She is totally capable of creating documentation that can also serve these purposes. (Apparently, the company didn’t have a content strategy that makes it easy to share content between sales, training and documentation.) But it’s just difficult to do it all while she faces pressure to limit cost and turnaround time at the same time.

Possible reactions

During the conversation, my position was that her company can’t reasonably expect her to make documentation more efficient while she continues to make up for other teams’ shortcomings. (This ties in with the “Two words every (lone) writer should know”, which are “Later” and “No”.)

Then I thought that maybe her manager doesn’t actually know which tasks an efficient technical writer does best (if you can even generalize that) and what to expect from the sales, marketing and training teams. In this case, it might even be an opportunity to clean up processes beyond just documentation (though this easily gets presumptuous).

Your turn

What do you think? Have you been expected to do things that got in the way of your efficiency and that were really the tasks of other teams? How can we writers deal with that? Any ideas? Please leave a comment.

Organize bookmarks in folders by task

Make the most of bookmarks by sorting them by tasks.

That’s the lesson I learned after months of trudging along with a looong list of bookmarks. I would add links for later reading, then randomly pick pages to read or otherwise process, and it was not working.

Around New Year’s, I cleaned up the mess, threw out dead links and pages that seemed merely vaguely interesting. The remainder went into separate folders. This has been working really well for me so far: I remember and find bookmarks faster, and I process them faster (which means reading and filing or deleting).

Organize by task, not by content

The trick for me was to create bookmark folders by task, not by subject matter or content! I now have the following bookmark folders:

  • Tech Writing with pages to read and archive useful ones.
  • Blog with pages to write about in this blog.
  • [Project] with pages to work on for articles or presentations, one per project.
  • Portable Freeware with pages of software to try out.
  • Travel with pages of hotels and restaurants to visit.
  • Inspire with pages to get me unstuck from writer’s block.
  • Lookup with pages to look up grammar rules, translations, prices, streets, etc.

Why does this work?

It works for two reasons:

  1. It’s basically following our own professional advice: Create task-based documentation, not documentation based on the stuff you organize, whether it’s a user interface or random web pages.
  2. It’s inspired by GTD (Getting Things Done): Get productive by ensuring you can simply crank widgets as a tech writer.

Bonus tip

Sync your bookmarks across all your machines for even better order and productivity. I use xmarks for Firefox to ensure all my bookmarks are always up to date, regardless of which PC I use. When I’m on someone else’s machine, I can still look up my bookmarks online.

Your turn

How do you organize your tech comm bookmarks? Do folders work? Is there a better way? Please leave a comment.

Improvements in Kai’s Tech Writing Blog

I’m now maintaining a page about past and upcoming articles and conferences called Elsewhere. It’s always available via the header tab.

And I have new header photograph which a couple of readers have asked about: I have a passion for the American Southwest and like to hike its deserts and mountains. The picture shows springtime in the Huachuca Mountains of southeastern Arizona. The view is from Coronado Peak looking east towards the San Pedro river valley. Walk off to the right/south for a couple of miles, and you’re in Mexico.

Improve documentation with quality metrics

Quality metrics for technical communication are difficult, but necessary and effective.

They are difficult because you need to define quality standards and then measure compliance with them. They are necessary because they reflect the value add to customers (which quantitative metrics usually don’t). And they are effective because they are the only way to improve your documentation in a structured way in the long run.

Define quality standards

First, define what high quality documentation means to you. A good start is the book Developing Quality Technical Information: A Handbook for Writers and Editors from which I take these generic quality characteristics for documentation topics:

  • Is the topic task-oriented?
    Does it primarily reflect the user’s work environment and processes, and not primarily the product or its interface?
  • Is the topic up-to-date?
    Does it reflect the current version of the product or an older version?
  • Is the topic clear and consistent?
    Does it comply with your documentation style guide? If you don’t have one, consider starting from Microsoft’s Manual of Style for Technical Publications.
  • Is the topic accurate and sufficient?*
    Does it correctly and sufficiently describe a concept, instruct the customer how to execute a task, or present reference information?
  • Is the topic well organized and well structured?*
    Does it follow an information model, if you have one, and does it link to relevant related topics?

* Measuring the last two characteristics requires at least basic understanding of topic-based authoring.

The seal of quality

You may have additional quality characteristics or different ones, depending on your industry, your customers’ expectations, etc. As you draft your definition, remember that someone will have to monitor all those characteristics for every single topic or chapter!

So I suggest you keep your quality characteristics specific enough to be measured, but still general enough so they apply to virtually every piece of your documentation. Five is probably the maximum number you can reasonably monitor.

Measure quality

The best time to measure quality is during the review process. So include your quality characteristics with your guidelines for reviewers.

If you’re lucky enough to have several reviewers for your content, it’s usually sufficient to ask one of them to gauge quality. Choose the one who’s closest to your customers. For example, if you have a customer service rep and a developer review your topics, go with the former, who’s more familiar with users’ tasks and needs.

To actually measure the quality of an online help topic or a chapter or section in a manual, ask the reviewer to use a simple 3-point scale for each of your quality characteristics:

  • 0 = Quality characteristic or topic is missing.
  • 1 = Quality characteristic is sort of there, but can obviously be improved.
  • 2 = Quality characteristic is fairly well developed.

Now, such metrics sound awfully loose: Quality “is sort of there” or “fairly well developed”…? I suggest this scale for purely pragmatic reasons: Unless you have a small number of very disciplined writers and reviewers, quality metrics are not an exact science.

The benefit of metrics is relative, not absolute. They help you to gauge the big picture and improvement over time. The point of such a loose 3-point scale is to keep it efficient and to avoid arguments and getting hung up on pseudo-exactitude.

Act on quality metrics

With your quality scores, you can determine

  • A score per help topic or user manual chapter
  • An average score per release or user manual
  • Progress per release or manual over time

Areas where scores lag behind or don’t improve over time give you a pretty clear idea about where you need to focus: You may simply need to revise a chapter. Or you may need to boost writer skills or add resources.
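To illustrate how such scores roll up, here’s a minimal sketch in Python. The topic names, characteristics, and scores are made up for illustration; the arithmetic is just the averaging described above (each topic scored 0–2 per characteristic, then averaged per topic and per release).

```python
# Hypothetical reviewer scores: one list of 0-2 ratings per topic,
# in the order of the quality characteristics below.
CHARACTERISTICS = ["task-oriented", "up-to-date", "clear", "accurate", "organized"]

topics = {
    "Getting started":  [2, 2, 1, 2, 1],
    "Configure alerts": [1, 0, 1, 1, 1],
    "Reference":        [2, 1, 2, 2, 2],
}

def topic_score(scores):
    """Average score for one topic, on the same 0.0-2.0 scale."""
    return sum(scores) / len(scores)

def release_score(all_topics):
    """Average of all topic scores: the release- or manual-level metric."""
    return sum(topic_score(s) for s in all_topics.values()) / len(all_topics)

for name, scores in topics.items():
    print(f"{name}: {topic_score(scores):.1f}")
print(f"Release average: {release_score(topics):.1f}")
```

Tracking these per-topic and per-release averages over several releases is what turns the loose 3-point scale into a useful trend line.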

Remember that measuring quality during review leaves blind spots in areas where you neither write nor review. So consider doing a complete content inventory or quality assessment!

Learn more

There are several helpful resources out there:

  • The mother lode of documentation quality and metrics is the book Developing Quality Technical Information by Gretchen Hargis et al. with helpful appendixes, such as
    • Quality checklist
    • Who checks which quality characteristics?
    • Quality characteristics and elements
  • Five similar metrics, plus a cute duck, appear in Sarah O’Keefe’s blog post “Calculating document quality (QUACK)”
  • Questionable vs. value-adding metrics are discussed in Donald LeVie’s article “Documentation Metrics: What Do You Really Want to Measure?”, which appeared in STC’s Intercom magazine in December 2000.
  • A summary and checklist from Hargis’ book is Lori Fisher’s “Nine Quality Characteristics and a Process to Check for Them”**.
  • The quality metrics process is covered more thoroughly in “Quality Basics: What You Need to Know to Get Started”** by Jennifer Atkinson, et al.

** The last two articles are part of the STC Proceedings 2001 and used to be easily available via the EServer TC Library until the STC’s recent web site relaunch effectively eliminated access to years’ worth of resources. Watch this page to see if the STC decides to make them available again.

Your turn

What is your experience with quality metrics? Are they worth the extra effort over pure quantitative metrics (such as topics or pages produced per day)? Are they worth doing, even though they ignore actual customer feedback and demands as customer service reps can register? Please leave a comment.

How you can exploit the “Big Disconnect”

By way of consumers, web 2.0 and social media exert a disruptive influence on corporate IT: Existing “systems of record” face challenges from new “systems of engagement”.

The thesis is by Geoffrey Moore in an AIIM white paper and presentation, and I came across it by following some links in Sarah O’Keefe’s post “The technical communicator’s survival guide for 2011”. So once again, I find a great idea, summarize and apply it, instead of thinking up something wildly original myself. (Maybe not the worst skill for a tech writer, come to think of it… 🙂 )

Out-of-sync IT developments

Moore’s premise builds on out-of-sync advances of corporate vs. consumer IT:

  • Corporate IT developments recently focused on optimizing and consolidating otherwise mature, database-based “systems of record” which execute all kinds of transactions for finance, enterprise resource planning, customer relationship management, supply chain, etc.
  • Consumer IT, on the other hand, saw the snowballing improvements in access, bandwidth and mobile devices which have quickly pervaded ever more spheres of everyday culture.

“The Big Disconnect”

This imbalance leads to the pivotal insight of Moore’s analysis: As I read it, the disruptive influence on corporate IT occurs not through technologies or processes, but through people.

People are quick to adopt or reject or abandon new consumer IT tools and habits that cater to their needs. The same people feel hampered by corporate systems and workflows that seem unsuitable and obsolete. Moore calls it “The Big Disconnect”:

How can it be that
I am so powerful as a consumer
and so lame as an employee?

How consumer IT affects corporate IT

For the next 10 years, Moore expects that interactive, collaborative “systems of engagement” will influence and complement, though not necessarily replace, traditional “systems of record”:

  • Old systems are data-centric, while new systems focus on users.
  • Old systems have data security figured out, new systems make privacy of user data a key concern.
  • Old systems ensure efficiency, new systems provide effectiveness in relationships.

For a full comparison of the two kinds of systems, see Moore’s presentation “A ‘Future History’ of Content Management”, esp. slides 10-12 and 16.

But does it hold water?

Moore’s analysis has convinced me. I used to think that corporate and consumer IT markets differ because requirements and purchase decisions are made differently. But this cannot do away with the “Big Disconnect” which I’ve seen time and again in myself and in colleagues. Personally, I know that this frustration is real and tangible.

Also, the development of wikis and their corporate adoption is a good case study of the principle that Moore describes. If you know of other examples, please leave a comment.

What does it mean to tech comm?

The “Big Disconnect” affects those of us in technical communications in corporate IT in several ways.

Tech writers write for disconnected corporate consumers. So we do well to integrate some of the features of “systems of engagement” that Moore describes:

  • Add useful tips & tricks to reference facts.
  • Provide discussion forums to complement authoritative documentation.
  • Ensure quick and easy access to accurate and complete documentation.

But technical communications can do one better by helping to ease the drawbacks of engaging systems:

  • Offer easy, comprehensive searches through disparate formats and sources.
  • Moderate forums and user-generated contents carefully to maintain high content standards and usability.

Tech writers are disconnected corporate consumers. So we can push for the improvement of the products and processes we describe or use.

  • On consumers’ behalf, we can advocate for improved usability and for documentation that is more efficient to use.
  • On our own behalf, we can insist on improving workflows that serve a system rather than us writers and our processes.
  • We can push to replace help authoring systems that support only fragments of our documentation workflows with more efficient tools.

Our managers are most likely also disconnected. So when we argue for any of the above disruptions, we can probably appeal to their own experience to justify them. We’ll still need good metrics and ROI calculations, though… 🙂

To read further…

The “Big Disconnect” and its effects connect nicely with a couple of related ideas:

Your turn

Does the Big Disconnect make sense to you – or is it just the mundane in clever packaging? Do you think it’s relevant for technical communications? How else can we tech writers exploit it? Please leave a comment.

Recommended read: Practice technical writing

Becoming a better tech writer requires practice.

Mike Pope, tech editor at Microsoft in Seattle, has a brilliant blog post about 12 ways to practice tech writing. The catch is he means “practice” like a musician, so you learn to do stuff better than yesterday – instead of just doing the same things over and over.

Over the past few years, I’ve tried all 12 ways, and they’ve all helped me to become a better writer. And most of them can be fun, too, at least most of the time… 🙂

Here are just six of the ways as a teaser, but I highly recommend you head on over to Mike’s post to find out about all of them with details and examples.

1. Read other technical writing attentively.
2. Read about writing.
5. Write something outside your usual material.
7. Edit someone else’s work.
10. Learn new tools and new ways to use your existing tools.
11. Talk to other writers.