My webinar slides as PDF handout

If you’ve attended my webinar “Getting ahead as a lone writer”, you might be interested in the slides in PDF:

They were supposed to be made available to attendees by the STC, but as I learned just yesterday, that apparently hasn’t happened.

If you have any more questions about the webinar or being a lone writer, feel free to browse my previous posts or pose your questions in a comment below.


Auditing Documentation and Processes at tcworld11

Auditing your documentation and your processes can help you estimate effort and anticipate issues as you prepare for localization or content migration. That’s what I learned in Kit Brown-Hoekstra’s useful 2-hour workshop at tcworld (tekom’s international half).

You can easily do the audit yourself: Take a little time, step back from your documentation, and identify weaknesses and areas for improvement. Acting on your audit results, you can

  • Improve customer satisfaction
  • Decrease localization costs
  • Establish a baseline and a direction to develop your documentation
  • Calculate costs and benefits of changes

If you don’t have an express mandate for the audit, it can be worth doing a sort of “draft audit”. It may come out a little patchy in places, but I think it can give you a first idea of where you stand. With the initial results and measures you can more easily get the time to do an in-depth audit. (But don’t be surprised if colleagues or managers hold you to the improvements you’ve uncovered… :-))

What to audit

The organization level

Perform a strategic SWOT analysis of Strengths, Weaknesses, Opportunities and Threats of your role in your organization. Internal strengths and external opportunities (mainly) will give you useful arguments to get buy-in from management for changes and further developments you plan. Internal weaknesses and external threats (mainly) help you to assess and manage risk as you proceed.

  • Strengths, for example, may include technical expertise and an understanding of user needs and tasks.
  • Weaknesses, for example, are poor self-marketing or resistance to change.
  • Opportunities, for example, can include agile development (which gives writers a better position in the process) and social media (if you adapt to them and moderate augmenting user-generated content).
  • Threats, for example, might be smaller documentation budgets or social media (if you do not adapt or cannot keep up with user-generated content).

Note how threats can be turned into opportunities, if you tackle them wisely! Or vice versa…

The process level

Assess your documentation process through all stages:

Requirements > Design > Writing > Review > Edit > Localization > Publication > Feedback > Modification > Deletion

Answer the following questions:

  • Are all stages well-defined?
  • Is it clear when and how you get from one stage to the next?
  • Do all participants in a stage know what to expect and what to deliver?
  • Can you measure the success of your process?

For the sake of an efficient process, treat each hand-over between participants or stages as an interface, and define as precisely as possible what is handed over, when, and how.
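If it helps to make those interfaces concrete, you could write each hand-over down as a small record. Here’s a minimal sketch in Python; the stage names, fields and example values are my own invention, not part of Kit’s workshop material:

```python
from dataclasses import dataclass

@dataclass
class Handover:
    """One hand-over between two stages of the documentation process."""
    from_stage: str    # stage that delivers
    to_stage: str      # stage that receives
    deliverable: str   # what is handed over
    trigger: str       # when the hand-over happens
    channel: str       # how it is handed over

# Hypothetical example: the hand-over from Review to Edit
review_to_edit = Handover(
    from_stage="Review",
    to_stage="Edit",
    deliverable="Reviewed topics with all reviewer comments resolved",
    trigger="All reviewers have signed off",
    channel="CMS workflow transition",
)

print(f"{review_to_edit.from_stage} -> {review_to_edit.to_stage}: "
      f"{review_to_edit.deliverable} (when: {review_to_edit.trigger}; "
      f"how: {review_to_edit.channel})")
```

Even if you never run such a script, filling in the three fields per hand-over quickly shows you which interfaces are underdefined.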

The product level

Identify qualities and issues of the product you document to distinguish them from those in your documentation. Weaknesses in documentation often mirror weaknesses or issues in the product, e.g., a poorly designed user interface or a workaround that’s required to complete the user workflow.

You need to know about these issues separately, because they hurt your documentation, but you usually cannot fix them yourself. You can only supply a band-aid.

The documentation level

Assess the structural quality of your documentation (not the quality of a manual or each topic). Answer these questions:

  • Do you have a suitable information model? This is an architecture that defines the structure of your documentation on the level of deliverables (such as a manual or online help) and on module level (such as a topic or a section).
  • To what extent does your documentation comply with that information model?
  • Do you write documentation so the topics or sections are reusable?
  • Do you reuse topics or sections to the extent that is possible?
  • Do you write documentation so it is ready and easy to localize?
    • Do you use standardized sentences for warnings and recurring steps to minimize localization effort?
    • Do you leave sufficient white space to accommodate “longer” languages? For example, German and Russian require up to 30% more characters to say the same as English (see the sketch after this list).
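To get a feel for how much extra room a “longer” language needs, a quick back-of-the-envelope check can help. This is a minimal sketch assuming the 30% rule of thumb from above; the sample label and the column width are made up:

```python
EXPANSION_FACTOR = 1.30  # rule of thumb: German/Russian need up to 30% more characters

def fits_after_expansion(text: str, max_chars_per_line: int) -> bool:
    """Check whether an English string still fits its label or table column
    once it has grown by the assumed expansion factor."""
    expanded_length = round(len(text) * EXPANSION_FACTOR)
    return expanded_length <= max_chars_per_line

label = "Save your changes before closing the window"
print(len(label), "->", round(len(label) * EXPANSION_FACTOR), "characters after expansion")
print("Fits a 60-character column:", fits_after_expansion(label, 60))
```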

Also assess the content quality of your documentation (now look at some manuals and topics):

  • Is it appropriate for your audience and their tasks?
  • Is it correct, concise, comprehensible?
  • Remember to audit localized documentation, too.

It’s usually enough to audit 10-20% of your manuals and topics to spot 80-90% of the issues.

Audit for efficiency

  • Be objective. Or at least as objective as you can be, if you’re auditing your own documentation.
  • Collect issues. You can use a simple spreadsheet to collect your findings: Enter the issue, its impact, its current cost, and the cost to fix it.
  • Prioritize improvements. Make sure a lower future cost makes the improvement worth doing once you’ve added up the current cost and the cost to implement it. Start with changes that cost the least and will save you the most (see the sketch after this list).
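If your findings live in a spreadsheet anyway, the prioritization is easy to automate. Here is a minimal sketch with invented issues and cost figures (in hours per year); only the logic – net saving is current cost minus the cost to fix, sorted from best to worst – reflects the approach above:

```python
# Each finding: (issue, impact, current cost per year, one-time cost to fix), costs in hours
findings = [
    ("Inconsistent warnings",  "higher localization cost", 40, 10),
    ("Missing concept topics", "more support calls",       80, 60),
    ("Outdated screenshots",   "confused users",           20, 25),
]

def net_first_year_saving(finding):
    """Hours saved in the first year after fixing the issue."""
    _, _, current_cost, fix_cost = finding
    return current_cost - fix_cost

# Start with changes that cost the least and save the most
for issue, impact, current, fix in sorted(findings, key=net_first_year_saving, reverse=True):
    print(f"{issue}: saves {current - fix:+} h in year one ({impact})")
```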

Bonus tool

To really dive into quality assessment of your documentation, you can totally combine Kit’s audit process with Alice Jane Emanuel’s “Tech Author Slide Rule” which focuses on content quality. Use both and you have a good handle on your documentation – and more improvement opportunities than you can shake a stick at!

Your turn

Do you find this helpful to audit your documentation? Do you know a better way? Or do you think it’s not worth it? Feel free to leave a comment.

Join me for “Getting ahead as a lone writer” at tekom

If you’re attending the tekom conference in Wiesbaden, consider joining me for my updated presentation “Getting ahead as a lone writer” on October 19 at 8:45 a.m. in room 12C as part of tekom’s international, English-speaking tcworld conference.

tcworld conference at Wiesbaden, Germany, in October 2011

My presentation will be an updated version of the session I did at TCUK 10. I will talk about how to overcome neglect and raise your profile by running your job (more) like a business with best practices. Here’s the abstract:

Lone writers are often the only people in their company who create and maintain documentation. They often operate without a dedicated budget or specific managerial guidance. In this presentation, Kai Weber will draw on his experience to show lone writers how to make the most of this “benign neglect”:

  • How you can still develop your skills – and your career
  • How you can raise your profile with management and colleagues
  • How you can contribute to a corporate communication strategy
  • How you can help your company to turn documentation from a cost center into an asset

Twitter meetup afterwards

Join us on Wednesday at 9:35 am on the upper floor in the foyer in front of rooms 12C and D for a #techcomm meetup after the session! @rimo1012 and I, @techwriterkai, are presenting at the same time in adjacent rooms, so if you know us from twitter, stop by and say hi!

I’ll be blogging from the conference, so watch this space…

“Statistics without maths” workshop at #TCUK11

Technical Communication UK 2011 is off to a good start with around 100 people attending six pre-conference half-day workshops on Tuesday. Even the night before saw about 20 attendees joining the organisers to help with last-minute setup chores, not to mention drinks and dinner.

On Tuesday afternoon, I attended the workshop “Statistics without maths: acquiring, visualising and interpreting your data” by Mike K. Smith, Chris Atherton and Karen Mardahl.

Mike K. Smith encourages us to insist on hard evidence

The workshop was virtually free of math in terms of formulas and calculations. Nonetheless, its introduction of concepts such as the different measures of average (mean vs. median vs. mode) or standard deviation vs. standard error challenged us tech communicators. Personally, I’m more familiar with the finer points of language than with mathematical concepts, so it was a bit of a stretch for me.
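For fellow word people, here is a minimal sketch with a made-up sample that shows how mean, median and mode can differ, and how standard deviation (the spread in the data) relates to standard error (the uncertainty about the mean):

```python
import statistics
from math import sqrt

# Made-up sample: pages read in a help system per user session
sessions = [2, 3, 3, 4, 5, 5, 5, 8, 21]

mean = statistics.mean(sessions)
median = statistics.median(sessions)
mode = statistics.mode(sessions)
stdev = statistics.stdev(sessions)    # spread of the data
stderr = stdev / sqrt(len(sessions))  # uncertainty about the mean

print(f"mean={mean:.1f}, median={median}, mode={mode}")
print(f"standard deviation={stdev:.1f}, standard error={stderr:.1f}")
# The single outlier (21) pulls the mean well above the median and the mode.
```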

The focus, however, was on general principles that give well-done statistics the power to infer a greater whole from representative data:

  • Strength of evidence, meaning the amount of data is large enough
  • Quality of data, meaning the data is good and useful to answer the question

A simple example illustrated these points:

1. Survey a group of people on whether they like Revels in general. (Revels are a British candy that comes with different fillings and hence different flavours.)

2. Hand out one Revel each to a smaller group of people and ask them how many liked the specific Revel they were given.

Frequently, the results of #2 are interpreted to mean #1. And that’s not even taking into consideration the alternative suggested by the workshop audience:

3. Watch a smaller group eat Revels (best without their knowing that they’re being watched) and draw your conclusions about how many really like Revels.

Another principle that was presented and discussed was that correlation measured by studies and statistics is not the same as causation: Two things that frequently or always occur together don’t mean that one causes the other. They could both be caused by a third overarching force. Or maybe there’s no causal relation between them at all…

The workshop’s treatment of these concepts, with dozens of examples, also revealed a few cultural differences: Statisticians seem to strive for accuracy and precision to the point of no longer being quite intelligible, at least not outside their peer group.

I think some of the finer points about the definitions of averages and standard measurements (see above) were lost on some of us tech comm’ers. Still, the general message resonated with many: Statistics deserve close scrutiny, for the numbers they present, for the conditions in which they were measured and for the questions they seek to answer.

As Mike Smith put it towards the end:

What do we want?
Evidence-based change!
When do we want it?
After peer review!

Alice Jane Emanuel’s “Tech Author Slide Rule” at #TCUK11

Technical Communication UK 2011 is off to a good start with around 100 people attending six pre-conference half-day workshops on Tuesday. Even the night before saw about 20 attendees joining the organisers to help with last-minute setup chores, not to mention drinks and dinner.

On Tuesday morning, I attended Alice Jane Emanuel’s workshop “The Tech Author Slide Rule: Measuring and improving documentation quality”. In a lively and engaging session, “AJ” taught us how to use the slide rule she came up with. It is actually an Excel spreadsheet that helps you measure qualities such as structure, navigation, language, and task orientation. You weight a good 30 or so such qualities, depending on how important they are to you. Then you can grade a document (or a collection of topics, after optional tweaking) by assigning points for each quality. The sheet sums up the weighted points per category and into a total score.
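I won’t reproduce AJ’s spreadsheet here, but the mechanics are easy to illustrate. This is a minimal sketch with invented categories, weights and scores; the actual slide rule has far more qualities and its own weight and score ranges:

```python
# category -> (weight, score); scores on a 0-5 scale (invented ranges)
grading = {
    "Structure":        (3, 4),
    "Navigation":       (2, 3),
    "Language":         (3, 5),
    "Task orientation": (4, 2),
}

weighted = {category: weight * score for category, (weight, score) in grading.items()}
total = sum(weighted.values())
maximum = sum(weight * 5 for weight, _ in grading.values())

for category, points in weighted.items():
    print(f"{category}: {points} weighted points")
print(f"Total: {total} of {maximum} possible")
```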

AJ Emanuel, with David Farbey, before explaining the Tech Author Slide Rule

While the sheet is excellent to track progress over time, you can see results very quickly by comparing your current documentation with legacy deliverables. The quantified approach offers a range of benefits that are otherwise hard to come by for tech writers:

  • The numbered scores appeal to managers and make it easier for writers to show progress and accountability.
  • The standardized categories can help you to build a team by ensuring that everyone focuses on the same qualities and by pointing to problems where individual documents go off the rails. They also help to train new writers.
  • In general, it helps to raise the profile of technical communication by clarifying its contribution and giving everyone in the organization more specific terms and numbers to discuss.

AJ emphasized that you need to keep the tool’s categories and usage consistent: It’s fine to change or add categories, weights and ranges of available weights and scores, but remember that you jeopardize comparability of results when you do. It may be fine to add a handicap for special cases, but in general, beware of grade inflation and keep your grading consistent.

I think the tool is a great addition to any peer review/editing process when fellow tech writers assess style guide compliance. Given its granularity of dozens of weighted criteria, I expect it would be most valuable for improving writing that’s problematic in specific categories. When different writers assess the quality of different deliverables over time, I’m not sure the grading is consistent enough and the one total score indicative enough to track progress in a meaningful, quantifiable way. However, I believe it could still show relative improvement.

I think it’s very much worth checking out AJ Emanuel’s slide rule, and it’s easy to test drive it:

  1. Download the tool from AJ’s website Comma Theory where you can also find additional information.
  2. If you want to, tweak the categories (for example, by comparing it with Gretchen Hargis’s qualities in her book Developing Quality Technical Information: A Handbook for Writers and Editors.)
  3. Quickly grade a (short) document in a legacy version which has since seen significant improvement and in the current version.
  4. Evaluate the scores and test them on colleagues or managers.

How to convince managers of topic-based authoring, part 2

To get managers behind a migration to topic-based authoring (TBA), focus on benefits and savings. This is the last post in a two-part series. Find the beginning and background in part 1.

I present the speaker notes and explanations instead of the actual slides, which only contain the phrases in bold below.

Benefits and challenges for writers

Make documentation efficient. For technical writers, the structure within topics and across all topics makes writing topics more efficient because you spend less time stressing over what goes where and over layout.

Make documentation transparent. The structure of the topics collection as a whole makes content more transparent: It’s easier to spot a missing topic, if each setup procedure (how to set up stuff) is accompanied by an operating procedure (how to use what you’ve just set up) and by a concept topic (what is that stuff you’ll set up and operate). Thanks to their structure and smaller units, documentation efforts also become easier to estimate – though maybe more tedious to report on in their details.
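Such gaps are easy to check mechanically. Here is a minimal, hypothetical sketch that assumes each subject should have a concept, a setup and an operating topic and reports whatever is missing; the subjects are invented:

```python
REQUIRED_TYPES = {"concept", "setup", "operate"}

# subject -> topic types that actually exist (invented example data)
topics = {
    "Payment interface": {"concept", "setup", "operate"},
    "Audit log":         {"setup", "operate"},
    "User roles":        {"concept"},
}

for subject, existing in topics.items():
    missing = REQUIRED_TYPES - existing
    if missing:
        print(f"{subject}: missing {', '.join(sorted(missing))} topic(s)")
```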

Collaborate more easily. The structure also makes it easier and faster for writers to collaborate on writing, reviewing and editing each other’s topics, again, because it’s quickly obvious what belongs (or is still missing) where.

Assume new tasks and responsibilities. Challenges for writers include learning a whole new range of tasks and responsibilities, from “chunking” subjects into topics and making sure there is one (main) topic for each subject, to interfacing nicely with colleagues’ topics, to peer-editing other people’s topics. On the other hand, most writers no longer have to double as layout designers and publishers, since that role usually rests with a few people.

Migrating legacy content. Another challenge is, of course, to migrate all existing content into topics. However, this is a one-time effort, while the benefits of clearly structured topics keep paying off.

Benefits and challenges for companies

Of course, the benefits and challenges for writers affect the company as a whole. But there are additional effects to the company owning topic-based documentation.

Leverage corporate content. Cleanly structured (and tagged) content in topics is much easier to leverage as part of a corporate content strategy. (Did I mention this was a presentation for managers? Hence the verb “to leverage”…) After all, there are other teams who may well hold stakes in some documentation topics or parts of them:

  • Product management or even Marketing may want to reuse parts of concept topics, such as use cases.
  • Training could reuse procedural topics.
  • Quickly searchable documentation can improve customer services – or any type of performance support your company may offer.

Make recruitment more efficient. Clearly structured, topic-based documentation will make it easier for a company to find and hire professional, qualified technical writers – and help new writers get up to speed faster.

Savings from topic-based authoring

Your mileage will vary, depending on your current deliverables, processes and tools. However, from the case studies I’ve seen around the web and at conferences, our numbers are not unusual. Savings are in hours for writers who apply topic-based authoring, compared to their earlier efforts without TBA (a back-of-the-envelope example follows the list below).

  • Writing Release Notes as usual – saving 0%
  • Writing Online Help, largely reusing Release Notes topics – saving 45-60%
  • Writing new User Manuals, by reusing some topics from Release Notes or Online Help – savings unknown
  • Updating existing User Manuals, by reusing Release Notes topics – saving 60-75%
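As a back-of-the-envelope example, assuming invented baseline efforts in hours and the saving ranges above, the combined effect might look like this:

```python
# deliverable -> (hours without TBA, saving range as fractions); baselines are invented
deliverables = {
    "Release Notes":      (40, (0.00, 0.00)),
    "Online Help":        (80, (0.45, 0.60)),
    "User Manual update": (60, (0.60, 0.75)),
}

for name, (hours, (low, high)) in deliverables.items():
    best_case, worst_case = hours * (1 - high), hours * (1 - low)
    print(f"{name}: {hours} h before TBA -> {best_case:.0f}-{worst_case:.0f} h with TBA")
```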

Complementary information

To read more about measuring efforts and costs, see my previous posts about:

About topic-based authoring, I recommend these two books:

Your turn

Would these arguments convince your managers to support you in moving to topic-based authoring? What other arguments might it take? Should such an initiative to restructure documentation come from writers or managers? Please leave a comment.

Improve documentation with quality metrics

Quality metrics for technical communication are difficult, but necessary and effective.

They are difficult because you need to define quality standards and then measure compliance with them. They are necessary because they reflect the value added for customers (which quantitative metrics usually don’t). And they are effective because they are the only way to improve your documentation in a structured way in the long run.

Define quality standards

First, define what high quality documentation means to you. A good start is the book Developing Quality Technical Information: A Handbook for Writers and Editors from which I take these generic quality characteristics for documentation topics:

  • Is the topic task-oriented?
    Does it primarily reflect the user’s work environment and processes, and not primarily the product or its interface?
  • Is the topic up-to-date?
    Does it reflect the current version of the product or an older version?
  • Is the topic clear and consistent?
    Does it comply with your documentation style guide? If you don’t have one, consider starting from Microsoft’s Manual of Style for Technical Publications.
  • Is the topic accurate and sufficient?*
    Does it correctly and sufficiently describe a concept or instruct the customer to execute a task or describe reference information?
  • Is the topic well organised and well structured?*
    Does it follow an information model, if you have one, and does it link to relevant related topics?

* Measuring the last two characteristics requires at least basic understanding of topic-based authoring.

The seal of quality

You may have additional quality characteristics or different ones, depending on your industry, your customers’ expectations, etc. As you draft your definition, remember that someone will have to monitor all those characteristics for every single topic or chapter!

So I suggest you keep your quality characteristics specific enough to be measured, but still general enough so they apply to virtually every piece of your documentation. Five is probably the maximum number you can reasonably monitor.

Measure quality

The best time to measure quality is during the review process. So include your quality characteristics with your guidelines for reviewers.

If you’re lucky enough to have several reviewers for your contents, it’s usually sufficient to ask one of them to gauge quality. Choose the one who’s closest to your customers. For example, if you have a customer service rep and a developer review your topics, go with the former, who’s more familiar with users’ tasks and needs.

To actually measure the quality of an online help topic or a chapter or section in a manual, ask the reviewer to use a simple 3-point scale for each of your quality characteristics:

  • 0 = Quality characteristic or topic is missing.
  • 1 = Quality characteristic is sort of there, but can obviously be improved.
  • 2 = Quality characteristic is fairly well developed.

Now, such metrics sound awfully loose: Quality “is sort of there” or “fairly well developed”…? I suggest this for purely pragmatic reasons: Unless you have a small number of very disciplined writers and reviewers, quality metrics are not an exact science.

The benefit of metrics is relative, not absolute. They help you to gauge the big picture and improvement over time. The point of such a loose 3-point scale is to keep it efficient and to avoid arguments and getting hung up on pseudo-exactitude.

Act on quality metrics

With your quality scores, you can determine

  • A score per help topic or user manual chapter
  • An average score per release or user manual
  • Progress per release or manual over time

Areas where scores lag behind or don’t improve over time give you a pretty clear idea about where you need to focus: You may simply need to revise a chapter. Or you may need to boost writer skills or add resources.
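If you record the reviewers’ ratings per topic, the scores above take only a few lines to compute. A minimal sketch, assuming invented topic names and ratings on the 0-2 scale for five characteristics:

```python
from statistics import mean

# topic -> ratings (0-2) for: task orientation, up-to-date, clarity, accuracy, structure
ratings = {
    "Set up payments":   [2, 2, 1, 2, 2],
    "Configure roles":   [1, 2, 2, 1, 1],
    "Audit log concept": [0, 1, 1, 1, 2],
}

scores = {topic: sum(r) for topic, r in ratings.items()}  # score per topic (max 10)
release_average = mean(scores.values())                   # average score per release

for topic, score in scores.items():
    print(f"{topic}: {score}/10")
print(f"Release average: {release_average:.1f}/10")
# Keep the release average per release to track progress over time.
```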

Remember that measuring quality during review leaves blind spots in areas where you neither write nor review. So consider doing a complete content inventory or quality assessment!

Learn more

There are several helpful resources out there:

  • The mother lode of documentation quality and metrics is the book Developing Quality Technical Information by Gretchen Hargis et al. with helpful appendixes, such as
    • Quality checklist
    • Who checks which quality characteristics?
    • Quality characteristics and elements
  • Five similar metrics, plus a cute duck, appear in Sarah O’Keefe’s blog post “Calculating document quality (QUACK)”.
  • Questionable vs. value-adding metrics are discussed in Donald LeVie’s article “Documentation Metrics: What Do You Really Want to Measure”, which appeared in STC’s Intercom magazine in December 2000.
  • A summary and checklist from Hargis’ book is Lori Fisher’s “Nine Quality Characteristics and a Process to Check for Them”**.
  • The quality metrics process is covered more thoroughly in “Quality Basics: What You Need to Know to Get Started”** by Jennifer Atkinson, et al.

** The last two articles are part of the STC Proceedings 2001 and used to be easily available via the EServer TC Library until the STC’s recent web site relaunch effectively eliminated access to years’ worth of resources. Watch this page to see if the STC decides to make them available again.

Your turn

What is your experience with quality metrics? Are they worth the extra effort over pure quantitative metrics (such as topics or pages produced per day)? Are they worth doing, even though they ignore actual customer feedback and demands as customer service reps can register? Please leave a comment.