“Statistics without maths” workshop at #TCUK11

Technical Communication UK 2011 is off to a good start with around 100 people attending six pre-conference half-day workshops on Tuesday. Even the night before saw about 20 attendees joining the organisers to help with last-minute setup chores, not to mention drinks and dinner.

On Tuesday afternoon, I attended the workshop “Statistics without maths: acquiring, visualising and interpreting your data” by Mike K. Smith, Chris Atherton and Karen Mardahl.

Mike K. Smith encourages us to insist on hard evidence

The workshop was virtually free of math in terms of formulas and calculations. Nonetheless, its introduction of concepts such as the different measures of average (mean vs. median vs. mode) or standard deviation vs. standard error challenged tech communicators. Personally, I’m more familiar with the finer points of language than with mathematical concepts, so it was a bit of a stretch for me.
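
For fellow word people, a minimal sketch using Python’s standard library may help to keep these measures apart (my own example, not the workshop’s):

```python
# A minimal sketch (my example, not the workshop's) of the concepts above.
import math
import statistics

ratings = [1, 2, 2, 3, 3, 3, 4, 5, 9]  # hypothetical survey ratings

print(statistics.mean(ratings))    # arithmetic average, pulled up by the outlier 9
print(statistics.median(ratings))  # middle value, robust against outliers
print(statistics.mode(ratings))    # most frequent value

stdev = statistics.stdev(ratings)         # spread of the individual ratings
stderr = stdev / math.sqrt(len(ratings))  # uncertainty of the mean itself
print(stdev, stderr)
```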

The focus, however, was on general principles that give well-done statistics the power to infer a greater whole from representative data:

  • Strength of evidence, meaning the amount of data is large enough
  • Quality of data, meaning the data is good and useful to answer the question

A simple example illustrated these points:

1. Survey a group of people on whether, in general, they like Revels, a British candy that comes with different fillings and hence different flavours.

2. Hand out one Revel each to a smaller group of people and count how many liked the specific Revel they were given.

Frequently, the results of #2 are interpreted to mean #1. And that’s not even taking into consideration the alternative suggested by the workshop audience:

3. Watch a smaller group eat Revels (best without their knowing that they’re being watched) and draw your conclusions about how many really like Revels.

Another principle that was presented and discussed was that correlation measured by studies and statistics is not the same as causation: That two things frequently or always occur together doesn’t mean that one causes the other. They could both be caused by a third, overarching force. Or maybe there’s no causal relation between them at all…

The workshop, which illustrated these concepts with dozens of examples, also exposed a few cultural differences: Statisticians seem to strive for accuracy and precision to the point where they’re not quite intelligible anymore, at least not outside their peer group.

I think some of the finer points about the definitions of averages and standard measurements (see above) were lost on some of us tech comm’ers. Still, the general message resonated with many: Statistics deserve close scrutiny, for the numbers they present, for the conditions in which they were measured and for the questions they seek to answer.

As Mike Smith put it towards the end:

What do we want?
Evidence-based change!
When do we want it?
After peer review!

Alice Jane Emanuel’s “Tech Author Slide Rule” at #TCUK11

Technical Communication UK 2011 is off to a good start with around 100 people attending six pre-conference half-day workshops on Tuesday. Even the night before saw about 20 attendees joining the organisers to help with last-minute setup chores, not to mention drinks and dinner.

On Tuesday morning, I attended Alice Jane Emanuel’s workshop “The Tech Author Slide Rule: Measuring and improving documentation quality”. In a lively and engaging session, “AJ” taught us how to use the slide rule she came up with. It is actually an Excel spreadsheet that helps you measure qualities such as structure, navigation, language, and task orientation. You weight a good 30 or so of these qualities in documentation, depending on how important they are to you. Then you can grade a document (or a collection of topics, after optional tweaking) by assigning points for each quality. The sheet sums up the weighted points per category and into a total score.

AJ Emanuel, with David Farbey, before explaining the Tech Author Slide Rule
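
To give you an idea of the mechanics, here is a minimal sketch of the weighted scoring in Python. The qualities, weights and scores are my own made-up examples; the actual tool is AJ’s Excel spreadsheet with its 30-odd qualities:

```python
# Hypothetical qualities with weights (how important each one is to you).
weights = {"structure": 3, "navigation": 2, "language": 3, "task orientation": 4}

# Points a grader assigned to one document, per quality (assuming a 0-2 scale).
scores = {"structure": 2, "navigation": 1, "language": 2, "task orientation": 1}

weighted = {quality: weights[quality] * scores[quality] for quality in weights}
total = sum(weighted.values())
maximum = sum(2 * weight for weight in weights.values())

for quality, points in weighted.items():
    print(f"{quality}: {points}")
print(f"Total: {total} of {maximum}")
```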

While the sheet is excellent to track progress over time, you can see results very quickly by comparing your current documentation with legacy deliverables. The quantified approach offers a range of benefits that are otherwise hard to come by for tech writers:

  • The numbered scores appeal to managers and make it easier for writers to show progress and accountability.
  • The standardized categories can help you to build a team by ensuring that everyone focuses on the same qualities and by pointing to problems where individual documents go off the rails. They also help to train new writers.
  • In general, it helps to raise the profile of technical communication by clarifying its contribution and giving everyone in the organization more specific terms and numbers to discuss.

AJ emphasized that you need to keep the tool’s categories and usage consistent: It’s fine to change or add categories, weights and ranges of available weights and scores, but remember that you jeopardize comparability of results when you do. It may be fine to add a handicap for special cases, but in general, beware of grade inflation and keep your grading consistent.

I think the tool is a great addition to any peer review/editing process when fellow tech writers assess style guide compliance. Given its granularity of dozens of weighted criteria, I expect it would be most valuable to improve writing that’s problematic in specific categories. When different writers assess the quality of different deliverables over time, I’m not sure if the grading is consistent enough and the one total score is indicative enough to track progress in a meaningful quantifiable way. However, I believe it could still show relative improvement.

I think it’s very much worth checking out AJ Emanuel’s slide rule, and it’s easy to test drive it:

  1. Download the tool from AJ’s website Comma Theory where you can also find additional information.
  2. If you want to, tweak the categories (for example, by comparing it with Gretchen Hargis’s qualities in her book Developing Quality Technical Information: A Handbook for Writers and Editors.)
  3. Quickly grade a (short) document in a legacy version which has since seen significant improvement and in the current version.
  4. Evaluate the scores and test them on colleagues or managers.

How to convince managers of topic-based authoring, part 2

To get managers behind a migration to topic-based authoring (TBA), focus on benefits and savings. This is the last post in a two-part series. Find the beginning and background in part 1.

I present the speaker notes and explanations instead of the actual slides, which contain only the phrases in bold below.

Benefits and challenges for writers

Make documentation efficient. For technical writers, the structure within topics and across all topics makes writing topics more efficient because you spend less time stressing over what goes where and over layout.

Make documentation transparent. The structure of the topics collection as a whole makes content more transparent: It’s easier to spot a missing topic, if each setup procedure (how to set up stuff) is accompanied by an operating procedure (how to use what you’ve just set up) and by a concept topic (what is that stuff you’ll set up and operate). Thanks to their structure and smaller units, documentation efforts also become easier to estimate – though maybe more tedious to report on in their details.
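
As a toy illustration of that transparency, a few lines of Python can flag subjects whose topic set is incomplete; the subjects and topic types are hypothetical:

```python
# Each subject should have a concept, a setup and an operation topic.
EXPECTED_TYPES = {"concept", "setup", "operation"}

topics = {
    ("backup service", "concept"),
    ("backup service", "setup"),
    ("backup service", "operation"),
    ("user accounts", "concept"),
    ("user accounts", "setup"),
    # The operation topic for "user accounts" is missing.
}

for subject in sorted({subject for subject, _ in topics}):
    present = {type_ for subj, type_ in topics if subj == subject}
    for missing in sorted(EXPECTED_TYPES - present):
        print(f"Missing {missing} topic for: {subject}")
```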

Collaborate more easily. The structure also makes it easier and faster for writers to collaborate on writing, reviewing and editing each other’s topics, again, because it’s quickly obvious what belongs (or is still missing) where.

Assume new tasks and responsibilities. Challenges for writers are learning a whole new range of tasks and responsibilities, from “chunking” subjects into topics and making sure there is one (main) topic for each subject to interfacing nicely with the topics of colleagues to peer-editing other people’s topics. On the other hand, most writers no longer have to double as layouters and publishers, since that role is usually in the hands of a few people.

Migrating legacy content. Another challenge is, of course, to migrate all existing contents into topics. However, this is a one-time effort, while the benefits of clearly structured topics keep paying off.

Benefits and challenges for companies

Of course, the benefits and challenges for writers affect the company as a whole. But there are additional effects to the company owning topic-based documentation.

Leverage corporate content. Cleanly structured (and tagged) content in topics is much easier to leverage as part of a corporate content strategy. (Did I mention this was a presentation for managers? Hence the verb “to leverage”…) After all, there are other teams who may well hold stakes in some documentation topics or parts of them:

  • Product management or even Marketing may want to reuse parts of concept topics, such as use cases.
  • Training could reuse procedural topics.
  • Quickly searchable documentation can improve customer services – or any type of performance support your company may offer.

Make recruitment more efficient. Clearly structured, topic-based documentation will make it easier on a company to find and hire professional, qualified technical writers – and help new writers get up to speed faster.

Savings from topic-based authoring

Your mileage will vary, depending on your current deliverables, processes and tools. However, from the case studies I’ve seen around the web and at conferences, our numbers are not unusual. Savings are in hours for writers who apply topic-based authoring compared to their earlier efforts without TBA.

  • Writing Release Notes as usual – saving 0%
  • Writing Online Help, largely reusing Release Notes topics – saving 45-60%
  • Writing new User Manuals, by reusing some topics from Release Notes or Online Help – savings unknown
  • Updating existing User Manuals, by reusing Release Notes topics – saving 60-75%
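
To translate such percentages into hours, here is a back-of-the-envelope sketch; the baseline figures are invented for illustration and are not our actual numbers:

```python
def remaining_hours(baseline, savings_low, savings_high):
    """Return the remaining effort range (in hours) after reuse savings."""
    return baseline * (1 - savings_high), baseline * (1 - savings_low)

# Hypothetical: online help took 120 hours before TBA, savings of 45-60%.
low, high = remaining_hours(120, 0.45, 0.60)
print(f"Online help: 120 h without reuse, {low:.0f}-{high:.0f} h with reuse")

# Hypothetical: a manual update took 80 hours before TBA, savings of 60-75%.
low, high = remaining_hours(80, 0.60, 0.75)
print(f"Manual update: 80 h without reuse, {low:.0f}-{high:.0f} h with reuse")
```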

Complementary information

To read more about measuring efforts and costs, see my previous posts about:

About topic-based authoring, I recommend these two books:

Your turn

Would these arguments convince your managers to support you in moving to topic-based authoring? What other arguments might it take? Should such an initiative to restructure documentation come from writers or managers? Please leave a comment.

Improve documentation with quality metrics

Quality metrics for technical communication are difficult, but necessary and effective.

They are difficult because you need to define quality standards and then measure compliance with them. They are necessary because they reflect the value add to customers (which quantitative metrics usually don’t). And they are effective because they are the only way to improve your documentation in a structured way in the long run.

Define quality standards

First, define what high quality documentation means to you. A good start is the book Developing Quality Technical Information: A Handbook for Writers and Editors from which I take these generic quality characteristics for documentation topics:

  • Is the topic task-oriented?
    Does it primarily reflect the user’s work environment and processes, and not primarily the product or its interface?
  • Is the topic up-to-date?
    Does it reflect the current version of the product or an older version?
  • Is the topic clear and consistent?
    Does it comply with your documentation style guide? If you don’t have one, consider starting from Microsoft’s Manual of Style for Technical Publications.
  • Is the topic accurate and sufficient?*
    Does it correctly and sufficiently describe a concept or instruct the customer to execute a task or describe reference information?
  • Is the topic well organised and well structured?*
    Does it follow an information model, if you have one, and does it link to relevant related topics?

* Measuring the last two characteristics requires at least basic understanding of topic-based authoring.

The seal of quality

You may have additional quality characteristics or different ones, depending on your industry, your customers’ expectations, etc. As you draft your definition, remember that someone will have to monitor all those characteristics for every single topic or chapter!

So I suggest you keep your quality characteristics specific enough to be measured, but still general enough so they apply to virtually every piece of your documentation. Five is probably the maximum number you can reasonably monitor.

Measure quality

The best time to measure quality is during the review process. So include your quality characteristics with your guidelines for reviewers.

If you’re lucky enough to have several reviewers for your contents, it’s usually sufficient to ask one of them to gauge quality. Choose the one who’s closest to your customers. For example, if you have a customer service rep and a developer review your topics, go with the former who’s more familiar with users’ tasks and needs.

To actually measure the quality of an online help topic or a chapter or section in a manual, ask the reviewer to use a simple 3-point scale for each of your quality characteristics:

  • 0 = Quality characteristic or topic is missing.
  • 1 = Quality characteristic is sort of there, but can obviously be improved.
  • 2 = Quality characteristic is fairly well developed.

Now, such metrics sound awfully loose: Quality “is sort of there” or “fairly well developed”…? I suggest this for sheer pragmatic purposes: Unless you have a small number of very disciplined writers and reviewers, quality metrics are not an exact science.

The benefit of metrics is relative, not absolute. They help you to gauge the big picture and improvement over time. The point of such a loose 3-point scale is to keep it efficient and to avoid arguments and getting hung up on pseudo-exactitude.

Act on quality metrics

With your quality scores, you can determine

  • A score per help topic or user manual chapter
  • An average score per release or user manual
  • Progress per release or manual over time

Areas where scores lag behind or don’t improve over time give you a pretty clear idea about where you need to focus: You may simply need to revise a chapter. Or you may need to boost writer skills or add resources.
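
Here is a minimal sketch of how such scores add up, with hypothetical topics and reviewer ratings on the 3-point scale above:

```python
# One list of 0-2 ratings per topic, one rating per quality characteristic.
CHARACTERISTICS = ["task-oriented", "up-to-date", "clear and consistent",
                   "accurate and sufficient", "well organised"]

release = {
    "Setting up backups": [2, 2, 1, 2, 1],
    "Restoring a backup": [1, 2, 1, 1, 0],
    "Backup concepts":    [2, 1, 2, 2, 2],
}

for topic, ratings in release.items():
    print(f"{topic}: {sum(ratings)} of {2 * len(CHARACTERISTICS)}")

average = sum(sum(ratings) for ratings in release.values()) / len(release)
print(f"Average score this release: {average:.1f}")
# Comparing this average across releases shows progress over time.
```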

Remember that measuring quality during review leaves blind spots in areas where you neither write nor review. So consider doing a complete content inventory or quality assessment!

Learn more

There are several helpful resources out there:

  • The mother lode of documentation quality and metrics is the book Developing Quality Technical Information by Gretchen Hargis et al. with helpful appendixes, such as
    • Quality checklist
    • Who checks which quality characteristics?
    • Quality characteristics and elements
  • Five similar metrics, plus a cute duck, appear in Sarah O’Keefe’s blog post “Calculating document quality (QUACK)”
  • Questionable vs. value-adding metrics are discussed in Donald LeVie’s article “Documentation Metrics: What Do You Really Want to Measure” which appeared in STC’s intercom magazine in December 2000.
  • A summary and checklist from Hargis’ book is Lori Fisher’s “Nine Quality Characteristics and a Process to Check for Them”**.
  • The quality metrics process is covered more thoroughly in “Quality Basics: What You Need to Know to Get Started”** by Jennifer Atkinson, et al.

** The last two articles are part of the STC Proceedings 2001 and used to be easily available via the EServer TC Library until the STC’s recent web site relaunch effectively eliminated access to years’ worth of resources. Watch this page to see if the STC decides to make them available again.

Your turn

What is your experience with quality metrics? Are they worth the extra effort over pure quantitative metrics (such as topics or pages produced per day)? Are they worth doing, even though they ignore actual customer feedback and demands as customer service reps can register? Please leave a comment.

Getting ahead as a lone author, the article

“Getting ahead as a lone author”, based on my presentation at last September’s TCUK conference, appeared as a 3.5-page article in the current Winter 2010 issue of ISTC’s Communicator.

Click the cover to download the article in PDF.

I’ve covered lone authors over the last months in blog posts and in my presentation, after which Katherine Judge, commissioning editor of ISTC’s quarterly, asked me to write it up as an article which I share with you today.

It’s a concise summary of my talk, along these headings:

  • Overcome benign neglect
  • Buy yourself time
    • Implement topic-based authoring
    • Don’t test when you should be documenting
    • Learn to say ‘later’ and ‘no’
    • Control interruptions
  • Treat documentation as a business
    • Make documentation an asset
    • Estimate documentation effort
    • Plan documentation properly
    • Embrace reporting and metrics

2011 megatrend in technical communications

I think this year’s megatrend for technical communicators and their managers, especially employed ones, is to position tech comm as a business in its own right – or to become redundant in the long run.

This is my conclusion after thinking about three astute predictions that Sarah O’Keefe recently blogged about.

– I know: I’m late to the predictions party. And I’m actually not very good at crystal ball gazing. I’m much better at reconfiguring what I find. So my contributions are comments and some additional reasons why I think Sarah’s right.

Three sides of the same coin

If you’ve read Sarah’s post, I’ll just remind you of the headings of her predictions:

  • A schism in tech comm (traditional vs. modern tech comm)
  • The age of accountability
  • Increased focus on business value

If that doesn’t ring a bell, head on over and read her post, I’ll wait… 🙂

I think Sarah’s predictions are really three sides (?) of the same coin. And I’d be surprised to see a documentation team experience only one of them.

Business value

The lackluster attitude about documentation of “No one reads it, but you gotta have it” has been widely replaced by close scrutiny of its value add and ROI. I’ve recently seen a doc team’s initiative that had to present the same business case, including costs saved and break-even, as any other internal initiative that wanted to spend some money. But more is at stake for us writers than playing the numbers game with managers and bean counters.

The question is how the tech comm team is perceived: as a cost center or as contributing to the corporate assets. The latter is of course more desirable, and it can only succeed when we break down departmental silos, when we collaborate with other teams and become user advocates; see my earlier comment on Scriptorium’s blog.

Now take a step back and think of what that cost vs. asset question means to your job and your career outlook. To me, it’s awfully close to being seen as part of the problem or part of the solution…

Another reason why I think tech writers do well to consider and promote their business value is…

Accountability

Sarah’s second prediction follows directly from attention to business value: Once a company expects ROI from documentation, it will want to monitor the output. And that means to hold the documentation team accountable, not by measuring the quantity of produced stuff, but by measuring the quality of useful assets that have been efficiently produced. (It’s worth keeping in mind the difference between accountability and responsibility; link courtesy of Jurgen Appelo and his presentation on authority and delegation.)

In the metrics, you may have some leverage: If you’ve ever tried it, you’ll find it’s awfully hard to come up with reliable metrics for documentation quality. The good news is that your managers will usually find it even harder. That’s a chance for you to apply some “Top strategies to embrace cost metrics”.

If you’re alert and on top of your game, you’ll find you have some agency in how you’re measured. It won’t always be your choice alone, but to a certain extent, you can choose sides in…

The schism in tech communication

The distinction looks crude, but I’ve found that many technical writers fall into one of the two camps that Sarah has identified:

  • “Traditional tech writers” who produce communication deliverables, such as user manuals and online help.
  • “Modern tech communicators” who provide user assistance services as part of the customer experience.

Note that this distinction has nothing to do with quality! I know very diligent, highly qualified people in both groups, and I’ve seen sloppy work in conventional manuals and modern screencasts.

I believe how that schism plays out for each writer in a team has a lot to do with the accountability of the documentation team, the responsibility of the team members and the dynamics between the members: Ideally, both types complement each other – and can show management that they are strong and agile because of their complementary strengths.

Now what?

Okay, so treating your documentation as a business before everybody else does sounds reasonable. For specific next steps, may I recommend the slides from my TCUK presentation “Getting ahead as a lone writer” and my other blog posts for lone writers. Even if you’re not a lone writer, you’ll find many ideas also apply to documentation teams.

Your turn

What do you think? Are these trends part of a larger movement to economize and commodify technical writing? Or is it nothing new, not worth beating a dead horse over? Please leave a comment.

Learn about DITA in a couple of hours

DITA 101, second edition, by Ann Rockley and others is one of the best tool-independent books about DITA. It’s a good primer to learn about DITA in a couple of hours.

Strong context

The book excels in firmly embedding DITA’s technologies and workflows in the larger context of structured writing and topic-based authoring.

DITA 101, 2nd edition, cover

I attribute this to the authors’ years of solid experience in these areas, which comes through especially in the earlier chapters.

“The value of structure in content,” the second chapter, illustrates structured writing with the obvious example of cooking recipes. Then it goes on to show you how to deduce a common structure from three realistically different recipes – which I hadn’t seen done in such a clear and concise way.

“Reuse: Today’s best practice,” the third chapter, takes a high-level perspective. First it acknowledges organizational habits and beliefs that often prevent reuse. Then it presents good business reasons and ROI measures that show why reuse makes sense.

Comprehensive, solid coverage

From the fourth chapter on, Rockley and her co-authors describe DITA and its elements very well from various angles:

  • “Topics and maps – the basic building blocks of DITA” expands on the DITA specification with clear comments and helpful examples.
  • “A day in the life of a DITA author” is very valuable for writers who are part of a DITA project. Writing DITA topics and maps is fundamentally different from writing manuals, and this chapter highlights the essential changes in the authoring workflow.
  • “Planning for DITA” outlines the elements and roles in a DITA implementation project for the project manager. Don’t let the rather brief discussion fool you: Without analyzing content and reuse opportunities, without a content strategy and without covering all the project roles, you expose your DITA project to unnecessary risk.
  • “Calculating ROI for your DITA project” has been added for the second edition. It’s by co-author Mark Lewis, based on his earlier white papers: “DITA Metrics: Cost Metrics” and “DITA Metrics: Similarities and Savings for Conrefs and Translation”. It expands on the ROI discussion of chapter 3 and creates minor inconsistencies that weren’t eliminated in the editing process.
  • “Metadata” first introduces the topic and its benefits in general and at length. Then it describes the types and usefulness of metadata in DITA. This might seem a little pedestrian, but it’s actually helpful for more conventional writers and for managers. It ensures they fully understand this part of DITA which drives much of its efficiencies and workflows.
  • “DITA and technology” explains elements and features to consider when you select a DITA tool, content management system or publishing system. This is always tricky to do in a book, as much depends on your processes, organization and budget. While the chapter cannot substitute for good consulting, it manages to point out what you get yourself into and what to look out for.
  • “The advanced stuff” and “What’s new in DITA 1.2” continue the helpful elucidation of the DITA specification with comments and examples that was begun in the “Topics and maps” chapter.

Mediocre organization

For all its useful contents, the book deserves better, clearer organization!

  • Redundancies and minor inconsistencies occur as concepts are defined and discussed in several places. For example, topics are defined on pages 4, 24 and 46. The newly added ROI chapter complements the ROI points in the third chapter, but neither has cross-references to the other.
  • The index doesn’t always help you to connect all the occurrences and navigate the text.
  • Chapters are not numbered, yet the numbering of figures in each chapter starts at 1. It’s not a big problem because references to figures always point to the “nearest” number, but it is irritating.

Formal errors

The book contains several errors which add to the impression of poor production values. They don’t hurt the overall message or comprehensibility, but they are annoying anyway:

  • Mixed up illustrations such as the properties box in Word (page 72) vs. the properties box from the File Manager (73)
  • Spelling errors such as “somtimes” (1) and “execeptions” (16)
  • Problems with articles such as “a author” (20) or a system that “has ability to read this metadata” (77)
  • Common language mistakes such as “its” instead of “it’s” (52)

Lack of competition

Another reason why it’s still one of the best books on the topic is that there simply aren’t many others!

  • Practical DITA by Julio Vazquez is the only serious contender, and its practical, in-the-trenches advice complements Rockley’s book very well.
  • [More books are pointed out in the comments, thanks everybody! – Added January 11, 2010.]
  • DITA Open Toolkit by “editors” Lambert M. Surhone, Mariam T. Tennoe, Susan F. Henssonow is a compilation of Wikipedia articles. Amazon reviewers call other titles produced by the same editing team and publisher a scam.

Of course, several other honorable and worthwhile books include articles or chapters on DITA and/or discuss DITA in context of specific tools.

My recommendation

Despite its shortcomings, the book’s own claim is valid: “If you’re in the process of implementing DITA, expect to do so in the future, or just want to learn more about it without having to wade through technical specifications, this is the book for you.”

I recommend that you read it if you are

  • Involved in a project to implement DITA
  • Writing or translating documentation in a DITA environment
  • Managing technical writers

Your turn

Have you read this book? What’s your opinion? Can you recommend other books or resources about DITA? Feel free to leave a comment!

How efficient is your documentation?

To gauge the efficiency of your documentation, consider the time spent to create it plus the time it takes to use it.

That’s the lesson I learned from applying Scott Berkun’s make vs. consume ratio to documentation. Scott’s general idea is that it takes time A to create a tweet or a poem, a book or a movie, and time B to read or watch it. Scott relates the two measures and points out how you can easily consume in a few hours what authors and publishers, actors and movie people have spent months fabricating.

“make + consume” in documentation

When it comes to documentation, I think you can add both measures to gauge efficiency of documentation – though not its coverage or quality!

But for time alone, I try to keep these tactics in mind:

  • Minimize the total time required for you to create your documentation and for customers to find, use and apply it.
  • Consider spending more time to make your documentation faster and easier to use, especially if you find that customers have trouble with it.
  • Consider spending less time with documentation tasks that do not help your customers in using the described product.

Of course, time isn’t the only yardstick. Accuracy, completeness, legal and contractual obligations are just some of the other factors.

Still, I’ve found “make+consume time” a useful benchmark to stay focused on what ultimately benefits the user and what doesn’t.
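
For what it’s worth, the arithmetic behind the benchmark fits into a few lines; all figures are invented for illustration:

```python
def total_time(make_hours, minutes_per_use, uses):
    """Hours invested by the writer plus hours spent by all readers."""
    return make_hours + (minutes_per_use / 60) * uses

# Hypothetical topic: quick to write, but slow to find and apply.
as_is = total_time(make_hours=2, minutes_per_use=10, uses=200)

# Reworked topic: 3 extra writing hours, but each use is 4 minutes faster.
reworked = total_time(make_hours=5, minutes_per_use=6, uses=200)

print(f"As is: {as_is:.0f} h, reworked: {reworked:.0f} h")  # 35 h vs. 25 h
```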

Further reading

If you’re concerned about documentation efficiency, you might also find earlier posts of interest:

Your turn

How do you gauge the efficiency of your documentation process and output? Can you credit your efforts towards making your documentation faster and easier to use? Please leave a comment.

Top 10 reasons for tech writers and trainers to collaborate

Technical writers can and should collaborate with trainers to offer customers a unified and cost-effective learning experience. Here are eight specific reasons why they should collaborate – and two why they cannot afford not to do it:

  1. Same goal: Ensure that customers can set up and operate the product efficiently, effectively and confidently.
  2. Same audience: Customers, more specifically users of the product (who, in a corporate setting, may or may not have made the decision to use it).
  3. Same demands by that audience: Fill a knowledge gap, whether it’s large or small, conceptual or practical.
  4. Similar deliverables: Conceptual and instructional/procedural information, possibly in different formats, such as training slides or handouts, user manuals or online help.
  5. Cost-effective deliverables: Share text and images, use cases and examples.
  6. Better coverage: Writers and trainers see the product and its users from different angles and can help avoid professional myopia.
  7. Beneficial reviews: Writers and trainers who review each other’s work also learn about their own deliverables.
  8. Satisfied customers: A unified learning experience increases user confidence, satisfaction with and trust in the product.

Companies where writers do not collaborate with trainers run a considerable risk:

  1. Confused customers: Incoherent or even contradicting messages in documentation and training materials confuse and alienate users.
  2. Lost business, potentially in three ways:
    • Bad reputation and bad impressions keep prospects from buying.
    • Bad learning experience keeps customers from continuing or returning to the product.
    • If you’re really big, external companies can take a bite out of your training or manual business. Your offerings are harder to replicate, and such competition less likely, if you offer one seamless learning experience.

Your turn

Have you considered or tried to collaborate with training? Has it been worthwhile? Can this be a first practical step towards content strategy? Please leave a comment.

Top 3 success factors in online help systems

Service speed as well as content’s structure and spacing are the top 3 factors that determine whether your online help system is successful. That’s the gist when I apply 7 of Cameron Chapman’s “10 Usability Tips Based on Research Studies” to online documentation.

Cameron’s post of September 15 looks at the numbers behind the usability of web sites and, as she writes, “some might surprise you and change your outlook on your current design processes”. And they underscore the importance of offering documentation that’s quick to find, understand and apply.

Speed

Speed is essential for online help success in two ways.

Write help content so it is fast and easy to skim and understand. Cameron mentions two studies by Jakob Nielsen:

  1. Users read only about 28% of the text on a web site, and the ratio decreases with the amount of text.
  2. Users follow an F-shaped pattern when skimming web sites. They start reading at the top left corner (in cultures which read left to right, top to bottom), skim key words along the line and move down the lines in that pattern.

To optimize your online help for such behavior, you can:

  • Use headings, bullet lists and parallelism to ensure that users read the “right” 28% of the text. These are the parts which orient readers and guide them to the solution of their question, and then the solution itself (and hopefully they read more than 28% of that page…)
  • Front-load your headings, list entries and paragraphs so readers get the gist from the beginning.  This quickly guides your readers and helps them in their F-shaped survey of your contents.

Power your server so it is fast to load and display the online help. Cameron refers to a study for the Bing search engine which shows significant decreases in clicks and user satisfaction once load times exceed two seconds. I assume that online help servers meet with similar impatience: Just like a search engine, they are intermediary services which users consult when they really want to do something else.

So measure and ensure that your online help web server can offer users short loading times. This is especially crucial in multi-step rendering processes of dynamic content which involve first a database and then on-the-fly rendering in HTML + CSS.
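
Even a small script can spot-check load times from the user’s side. The URLs below are placeholders, and the 2-second threshold follows the Bing study Cameron cites:

```python
import time
import urllib.request

PAGES = [
    "https://help.example.com/setup",   # placeholder URLs; use your own
    "https://help.example.com/faq",
]

for url in PAGES:
    start = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        response.read()  # include the full download in the measurement
    elapsed = time.perf_counter() - start
    flag = "SLOW" if elapsed > 2.0 else "ok"
    print(f"{flag}  {elapsed:.2f}s  {url}")
```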

Space

The spatial design of information is the second essential factor in the success of your online help.

Use white space to improve readability and reading comprehension. A study at Wichita State University found that users prefer text on web pages with margins and optimal leading (= vertical spacing between text lines). They also retain better what they have read.

“Don’t worry about ‘the fold’”, says Cameron. Contrary to popular belief, users do scroll and read below the ‘fold’ of the initially visible top part of a web page. Cameron points to studies by a web analytics company and design agency which conclude that there is no correlation between page length and the number of readers who scroll at least 90% to the bottom. Instead, users apparently scroll when they think it’s worth scrolling – which again emphasizes the content, its readability and usability.

Structure

The structure of topics and contents in your online help is the third essential factor.

Navigation beats search. Cameron cites two studies that found users prefer navigation and usually resort to search only after the navigation failed them. (I assume this differs for experienced users who know what they can expect from navigation and search respectively.)

So do your technical communications team and your users a favor and maintain a solid topic structure that writers and readers find worthwhile to use. A good topic structure is a map that orients users throughout the system and in context. By contrast, a search result merely shines a spotlight on the topic a reader may or may not need. In short: Don’t let your search bail out poor structure and bad navigation.
