2nd day of sessions at TCUK 13

The business and management side of tech comm was the predominant topic of my TCUK13 experience, as I reflect some more on the sessions I attended and the conversations I joined.

A. Westfold on collaborative authoring in DITA

Andrew presented a case study of McAfee over several years, from separate product teams and “artisanal” lone writers to a larger, unified team of writers collaborating in DITA. During this time, McAfee also grew by acquisitions, which meant that additional writers, methods and tools came on board. Here are the most essential stages of their journey:

  1. Improve several individual procedures for quick wins: Single sourcing reduced translation efforts. Automating the translation round-trip cut out costly manual layout efforts.
  2. Move to topic-based authoring: They chunked up content into topics and moved them into DITA to validate the topic structure. (It turned out that many task topics could not be converted automatically and essentially had to be rewritten in valid structure – see the minimal sketch after this list.)
  3. Bring in a content management system to reap the full benefit from single sourcing and topic-based authoring. This helped to reduce the number of redundant topics and to make localization even more efficient.
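
For illustration, here is a minimal sketch of what a structured task topic looks like in DITA; the topic id, title and steps are invented examples, not McAfee content:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE task PUBLIC "-//OASIS//DTD DITA Task//EN" "task.dtd">
    <!-- Minimal DITA task topic; id, title and steps are invented examples. -->
    <task id="configure_scan_schedule">
      <title>Configure the scan schedule</title>
      <taskbody>
        <prereq>You need administrator rights.</prereq>
        <steps>
          <step><cmd>Open the scheduling console.</cmd></step>
          <step><cmd>Select the scan profile and set the interval.</cmd></step>
        </steps>
        <result>The scan runs at the configured interval.</result>
      </taskbody>
    </task>

Rewriting legacy procedures into this fixed prereq/steps/result order is the kind of manual work that could not be automated.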

While their journey is far from finished, McAfee has realized the following benefits so far:

  • Easier administration of topics than of larger content chunks before. It’s also easier to solicit reviews for smaller stand-alone chunks.
  • Faster, more consistent creation of deliverables for several product variants thanks to better use of standard templates.
  • Documentation processes align well with recently introduced agile development processes.
  • More efficient, streamlined workflow thanks to better integration between documentation and localization.

I really enjoyed Andrew’s presentation. It showed that projects to improve tech comm do work out, even if you don’t always see past the next stage, and you may have to adapt due to other changes in the company.

A. Warman on “Managing accessible mobile content”

Adrian Warman from IBM combined two important tech comm issues, accessibility and documentation for mobile, into one survey session.

Accessibility makes it easier for everyone to fit in, participate and contribute, irrespective of disabilities. In short, it ensures that a user’s disability does not mean a personal disadvantage. For tech comm, this means it is sufficient that the documentation is accessible in one format. For example, if your online help in HTML is accessible, it’s not necessary to make the same contents in PDF accessible as well – or vice versa, as the case may be. Adrian advised us to keep an eye on “EU mandate M 376”, which may soon make some level of accessibility mandatory for products traded within the EU.

Mobile (smartphones and tablets) for tech comm means not just a technology, but an expectation, a mindset. It’s more than simply fitting our output onto smaller screens. Its different dimensions of interactivity, such as progressive disclosure and user-generated content, challenge us tech writers to re-think how best to convey an idea. Which taxonomy best supports both mobile devices and accessibility?

I don’t think there was a lot of new, revolutionary content here, but since I haven’t dealt much with either topic so far, it was a welcome introduction that was concise and well presented.

E. Smyda-Homa on useless assistance

Edward reported on his Twitter project @uselessassist, where he “Retweets to remind organizations of the frustration and negative emotions that result from poorly prepared assistance.” He presented many examples of poor user assistance. Some people complained about insufficient instructions, whether they had too few images or nothing but images. Some found the instructions too long (“I know how to prepare toast!”) or too short or redundant. Some pointed out typos or bad translations.

This was a very entertaining session – and you can easily get the gist of it by simply looking up the account or following the Twitter feed. It’s anecdotal evidence in real time that users actually do read the manual – or at least try to.

While every tweet is heartfelt, I think not every one merits a change in the documentation – if only because some contradict each other. But I find Edward’s project very enlightening and nodded to myself in embarrassed recognition a couple of times…

– Feel free to leave comments about any of the sessions, whether you have attended them or not.

Address information overload in tech comm

Addressing information overload helps your user assistance succeed when knowing your audience and offering correct and concise content is not enough. You can have great topics, well written and well structured, but if they’re part of an information deluge, they won’t help your users get their stuff done.

I take my cue from a post by Nathaniel Davis over at UXmatters called “IA Strategy: Addressing the Signatures of Information Overload”. Nathaniel describes six such signatures. I think at least three of them have something to say about why and how too much information fails in documentation, too.

1. Feedback

Feedback is the essential reality check to determine whether users suffer from information overload. Customers may report that they’re not sure they’ve found the right information or that they cannot apply it efficiently. Even if the content is fine topic by topic, the sheer bulk of information can be unmanageable. In this case, consider improving search and browsability for more efficient use of the documentation.

2. The utility gap

The utility gap means that customers only use a small fraction of all the information they have at their disposal. As Nathaniel says, it’s what I have vs. what I use.

If certain user types experience utility gaps, consider addressing them with special documents. For example, you could assist novice users with a quick start document. Or address a special use case which only requires one of the many processes, plus some reference information. With topic-based authoring, it’s usually easy to create such additional documents by re-using the relevant topics. (Maybe add one or two specific “glue topics” to make sure the new document still flows nicely…)
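
To sketch what such an additional deliverable could look like in DITA terms (the file names below are invented for illustration), a quick-start map could simply collect existing topics plus one new glue topic:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE map PUBLIC "-//OASIS//DTD DITA Map//EN" "map.dtd">
    <!-- Hypothetical quick-start deliverable assembled from reused topics. -->
    <map>
      <title>Quick Start</title>
      <topicref href="quickstart_overview.dita"/>  <!-- new "glue" topic -->
      <topicref href="c_basic_concepts.dita"/>     <!-- reused concept topic -->
      <topicref href="t_install_module.dita"/>     <!-- reused task topic -->
      <topicref href="t_run_first_report.dita"/>   <!-- reused task topic -->
    </map>

Only the first topic is written for this deliverable; everything else is reused as-is.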

If all users experience utility gaps, consider progressive disclosure by layering your content. The benefit of offering all content within three mouse-clicks wears off if it’s too much. Progressive disclosure structures content by providing the most essential, most frequently used topics first and more obscure information later. Make sure, however, that all topics remain searchable and findable!

3. Filter failure

Filter failure means that users lack ways to judge which information to trust and use. It’s what I can use vs. what I should use. Filter failure can be solved with tools and with content.

Customers who are confident in their own judgment require tools to filter information. In documentation, faceted search allows users to narrow down search results by categories and weed out inapplicable information.

Customers who prefer to rely on expert judgment will benefit from recommendations in the content itself. Consider adding such recommendations for certain user roles or use cases to guide customers to the most suitable information.

– Have you had symptoms of information overload in documentation? Would these strategies help users to cope? What other solutions are there? Feel free to leave a comment.

ROI of topic-based authoring and single sourcing

Breaking down content silos brings benefits and ROI to topic-based authoring, even if you have little or no translation. I’ve cut down time to write and maintain three deliverables by 30-40% by reusing topics.

Content silos

The company I work for supplies documentation for its software solution in different formats, among them:

  • Release notes inform customers about new features and enhancements in new versions.
  • User manuals describe individual modules of the product, how to set them up, how to operate them and what kind of results to expect from them.
  • Online help focuses on reference information for windows and fields, but has some overlaps with information in user manuals.

Content silos maintain separate content per deliverable. Originally, these three deliverables were created and maintained in separate “content silos”, using separate tools and separate source repositories. So the documentation process looked like this:

  1. Write release notes in Word.
  2. Update or write user manuals in Word.
  3. Update the online help in a custom-built help tool that uses Word as an editor and Microsoft’s HTML Help Workshop to publish to Microsoft Compiled HTML Help (.CHM).

I’ve found that I could save some time by writing the release notes with the other deliverables in mind, so I could copy and paste content and reuse it elsewhere. For example, my release notes describe a new batch job which helps to automate a tedious workflow. If I describe the batch job in detail, the same content fits easily into the user manual. (Yes, it bloats the release notes, but at least the information is available at the release date, while we didn’t always manage to update the user manual in time.)

Copying and pasting worked even better once I structured the content in each of the three silos as topics. For example, a task topic from the release notes would fit almost gracefully among similar task topics in an existing manual.

But such manual copy-and-paste reuse is really not efficient or maintainable, because my stuff is still all over the place. I may write in – or copy to – four places, but then remember to update only two of them; enter inconsistency and its twin brother unreliability.

Getting real about reuse

To get the full benefits and ROI of topic-based authoring, we’ve found it’s not enough to simply write topics and keep your concepts separate from your tasks. We’ve had to adjust our documentation architecture, our tools and our process.

The guiding principle is: “Write once, publish many”. This tech comm mantra proved to be the key. We now aim to have each piece of information in only one topic. That unique topic is the place we update when the information changes. And that’s the topic we link to whenever a context requires that information.

Single sourcing is the best way to get a collection of unique topics into user manuals and online help. So we needed to consolidate our separate content silos into a single repository from which we can publish all our deliverables.

MadCap Flare is the tool we chose. It gives us a reliable, yet flexible way to maintain a common repository of topics. For each deliverable, such as release notes and user manuals in PDF and online web help, we simply create a new table of contents (TOC) which collects all topics that go into the deliverable.

With topic reuse, we define tables of contents to assemble topics per deliverable.
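
Our actual Flare project uses Flare’s own TOC format, but the principle can be sketched in plain DITA map terms (file names and the version number here are invented): two deliverables simply reference the same topic.

    <!-- release_notes.ditamap: hypothetical TOC for the release notes -->
    <map>
      <title>Release Notes 7.2</title>
      <topicref href="c_whats_new.dita"/>
      <topicref href="t_run_cleanup_batch_job.dita"/>  <!-- shared topic -->
    </map>

    <!-- user_manual.ditamap: hypothetical TOC for the user manual -->
    <map>
      <title>User Manual</title>
      <topicref href="c_batch_jobs.dita"/>
      <topicref href="t_run_cleanup_batch_job.dita"/>  <!-- same shared topic -->
    </map>

When the shared topic changes, both deliverables pick up the update at the next build.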

The writing process has changed considerably: Previously, I would focus on writing a release note entry or a chapter in a user manual. Now I find myself focusing on a specific task or concept and how to describe it as stand-alone content so it works for the user, whether it appears in a user manual or in the release notes.

The flexibilities of MadCap Flare’s conditions feature and of our DITA-based information model help us to accommodate the differences of our deliverables. We still write a few topics explicitly for a specific deliverable. For example, in release notes, short “glue” topics serve to introduce new concept topics and task topics to establish some context for the user and explain why a new feature is “cool”. In user manuals, an introductory chapter with a few topics explains what to find where and which sections to read for a quick start.
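
As a rough sketch of how such conditions work in plain DITA (Flare sets its condition tags differently, but the idea is the same; the attribute value and file name below are invented):

    <!-- In a topic: mark a paragraph as relevant only for the release notes. -->
    <p audience="release-notes">Here is why this new feature is cool: ...</p>

    <!-- filter_user_manual.ditaval (hypothetical name): exclude that content
         when building the user manual. -->
    <val>
      <prop att="audience" val="release-notes" action="exclude"/>
    </val>

The topic itself stays single-sourced; only the build decides what each deliverable shows.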

But most of the topics can now be used in release notes, user manuals and online help alike. Since I’ve gone from copy-and-paste in three content silos to single sourcing topics in Flare, the time to write and update documentation for my module has decreased by 30-40%. It’s on the lower end if a new version brings a lot of brand-new features. It’s higher if there are more enhancements of existing functionality.

Turning tech comm into a biz asset by Sarah O’Keefe

Turning technical communications into a business asset, according to Sarah O’Keefe, is mainly about justifying cost – which is necessary, but also possible. Her session at tekom12 was part of the Content Strategy stream, which was presented, as last year, by Scott Abel.

How expensive is your documentation – really?

Much progress in a tech comm department gets stymied when we, the tech writers, say: “Ah, that’d be great – but they’ll never pay for it!” What that really means is: “‘They’ don’t see the value (or the urgency).” So to prove the value behind tech comm, we need to show how we can either save money (by reducing effort) or generate additional revenue (by producing value that exceeds our cost).

Sarah points out several ways to do this:

  • Show how tech comm can address legal or regulatory issues. Avoiding lawsuits is a great way to save your employer’s money!
  • Control the real cost of tech comm, because “cheap can be very expensive”: Yes, you may get something akin to documentation from a secretary or an intern, but…
    • Is your documentation efficient to maintain?
    • Does it scale or allow publication in other formats?
    • Does it actually satisfy your customers and support your brand – or does it stab your corporate value statement in the back?

Cost containment strategies

Sarah mentioned several strategies to control documentation cost.

The first bunch has to do with efficient content development:

  • Reuse as much content as possible: Write once, use many times, either in different places of the same format or in different output formats.
  • Automate formatting: Manually handcrafted formatting of deliverables can be a huge cost factor. It’s not uncommon for tech writers to spend 20% of their time (and hence a sizable chunk of money) on formatting output. Automate this by relegating formatting to templates or CSS.
  • Localization scales content efficiencies: The more inefficiencies you have in your original documentation processes, the more inefficient localizing or translating your content will be. This applies to content reuse, content variants and formatting alike.

Then there’s cost reduction outside of the tech comm team, for example, in tech support:

  • Consider whether your documentation is good enough to deflect the maximum possible number of support calls. Anything that users cannot find in the documentation, whether it’s missing or unfindable, drives up costs for your tech support staff.
  • Ensure your tech support staff has access to your documentation in formats they can work with efficiently. Downloading and then opening a document of 10 or 20 MB is not only clumsy in its own right, it’s also unlikely to present the required information in the most efficient way…
  • Ensure your documentation content is actually useful to tech support staff: It must not only be accurate, but also up-to-date. Consider the nightmare in terms of costs and maintenance if tech support spun off their own documentation to augment the “official documentation”. Instead, invite them to contribute to the documentation you create.

Make documentation more strategic

Then there are a few strategies to make documentation more strategic, or rather, more strategically valuable:

  • Ensure your documentation is not only searchable (so it’s captured by publicly accessible search engines), but also findable (so people know where and how to get to it) and discoverable (so people link to it, from blogs or forums or twitter or the like).
  • Align tech comm to larger business goals: Find a corporate goal, preferably one that is tied to revenue to be made or cost to be avoided, and show: If the tech comm team did this, it could contribute approximately that much money (in savings or additional revenue) to that larger corporate goal.

Conclusion

Sarah’s talk was geared towards the strategic angle of tech comm, but succeeded in making valuable points very clearly. Whether you can actually apply her advice in your situation may depend on how urgently managers with budget control feel the need to improve tech comm.

Scott Abel on Structured Content at TCUK12

Scott Abel delivered his keynote It’s All About Structure! Why Structured Content Is Increasingly Becoming A Necessity, Not An Option in his usual style: Provocative, but relevant, fun and fast-paced (though he said he was going to take it slow). He even channeled George Carlin’s routine on Stuff: “These are ‘MY Documents’, those are YOUR documents. Though I can see you were trying to get to MY Documents…”

His style doesn’t translate well onto a web page, so I’ll restrict myself to his 9 reasons Why Structured Content Is Increasingly Becoming A Necessity:

  1. Structure formalizes content, so it can guide authors who need to make fewer decisions when writing it. It also guides readers who can find more easily where the relevant information is in the whole documentation structure or within a topic. And it guides computers which can extract relevant information automatically and reliably.
  2. Structure enhances usability by creating patterns that are easy to recognize and easy to navigate with confidence.
  3. Structure enables automatic delivery and syndication of content, for example, via twitter – and you’ll be surprised occasionally when and how other people syndicate your “stuff”.
  4. Structure supports single-sourcing which means you can efficiently publish content on several channels, whether it’s print or different online outputs, such as a web browser, an iPad or a smartphone.
  5. Structure can automate transactions, such as money transfers, whether they are embedded in other content or content items in their own right.
  6. Structure makes it easier to adapt content for localization and translation, because you can chunk content to re-use existing translations or to select parts that need not only be translated but localized to suit a local market.
  7. Structure allows you to select and present content dynamically. You can decide which content to offer on the fly and automatically, depending on user context, such as time and location.
  8. Structure allows you to move beyond persona-ized content. This is not a typo: Scott doesn’t really like personas. He thinks they are a poor approximation of someone who is not you – and one that is no longer necessary. With structured content (and enough information about your users) you can personalize your content to suit them better than personas ever could.
  9. Structure makes it much easier to filter and reuse content to suit particular variants, situations and users.

How to disrupt tech comm in your organization?

If you need to “disrupt” your tech comm content, I believe it’s more beneficial to integrate content across the organization than just to get tech comm to become more business-oriented or more like marketing.

The idea comes out of a worthy new collaborative project Sarah O’Keefe launched last week, Content Strategy 101: Transform Technical Content into a Business Asset. (This blog post is based on a couple of comments I’ve left on the site.)

Tech comm goes to business school

A recurring discussion, not only on this blog but also elsewhere, is that tech comm needs to become more business-like to be justifiable in the future. Proponents of this view definitely have a point, if only because tech comm is often seen as a cost center and finds it hard to claim a return on investment.

I think, however, that this view is detrimental to all involved parties:

  • Tech comm risks abandoning its benefits to users and its quality standards in an attempt to be “more like marketing”.
  • Managers may risk permanent damage to the documentation of their product without solving the bigger problem.

Breaking down all silos

The bigger problem often is that most content production is inefficient – because it occurs in parallel silos. Many companies have gotten good at making their core business more efficient. But they often neglect secondary production of content which remains inefficient and fragmented.

I’ve seen several companies where marketing, technical communications and training (to name just three areas) waste time and money. Due to inefficient, siloed processes, tools and objectives, they create similar, overlapping content:

  • Marketing and tech comm create and maintain separate content to explain the benefits of a product.
  • Tech comm and training write separate instruction procedures for manuals and training materials.

Once companies wake up to these redundancies, all content-producing units will face pressure to streamline content and make it easier to produce and reuse. This will revolutionize corporate content production and publishing.

Quo vadis, technical communicators?

I think this issue raises two questions for technical communicators.

The strategic question is:

Which kind of content disruption is more beneficial for the organisation and for customers: Folding tech comm into marketing or integrating all content with a corporate content strategy?

The answer depends on several issues.

The tactical question is:

What’s the role of technical communicators in this content disruption: Are they the movers or the movees? Are they shaping the strategy or following suit?

The answer again depends on several issues:

  • What is your personality, clout and position in the organization?
  • Which team has the most mature content and processes to be a candidate to lead any kind of strategic change in content?

I think tech comm can lead a content strategy, especially if and when the tech comm team knows more about content than marketing or training or other content producers.

Linking topics: Cross-reference or relationship table?

Choose the appropriate reference type, cross-reference or relationship table, to link between topics so you and your readers get the most from your documentation.

When you’re moving from less-than-structured documentation to topic-based writing, one of the less apparent challenges is to link your related topics to one another. You could just keep on using cross-references, but then you’d miss out on some of the benefits of topics.

Whether you write topics using a standard like DITA or a tool such as MadCap Flare, you have a second way to relate topics: relationship tables. It is important to distinguish the two link types, because each serves a unique purpose.

Cross-references

A cross-reference is the link you know from Word or other document-based writing: You create a link to a heading or a bookmark, it can show the heading title, and it updates automatically if that heading (or a page number) changes.

It leads readers from a certain point or condition to another place. It tells readers:

If you want to do or know that now, go over there.

So far, so good. This kind of link works well if you have a document with an organised sequence. Occasionally, you need to offer the reader a branch into two alternative scenarios or a jump to another place.
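
In DITA, for instance, such an inline link is an xref element; the target file name below is made up for illustration:

    <!-- Cross-reference from within a task step to a troubleshooting topic. -->
    <step>
      <cmd>Import the tax data.</cmd>
      <info>If the import fails, see
        <xref href="t_troubleshoot_import.dita"/> before you continue.</info>
    </step>

The link sits inside the topic text, at the exact point or condition where the reader needs it.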

But when you convert your content into a pile of loosely connected topics, you have a much greater need, and many more opportunities, to relate topics.

Disadvantages of cross-references

Cross-references aren’t always the best way to relate topics because they:

  • Disrupt reading flow and orientation. For users, it’s fine to make the occasional choice between scenario A or B. But offering too many links will tempt many users to wonder what they’re missing at the other end. And following link after link from within one topic to the next quickly breaks the flow of tasks and leaves the user confused.
  • Create a web of dependencies. With cross-references, you tie your topics to one another in a certain preferred scenario you engineer. This scenario may not suit the user’s current need. It undermines the flexible “stand-alone” independence of topics that supports multiple use cases. And it makes it harder for writers to reuse them.

So while cross-references are easily a preferred way of linking contents in document-based writing, consider carefully how they will affect your use of topics and your users’ benefit from them.

Cross-references in topics work well in these cases:

  • Link to mandatory pre-requisites or required next steps.
  • Link to a series of tasks in an overview/parent topic.

In either case, the user pretty much must follow the link to achieve anything useful, so cross-references are fine. Just make sure that the link and the surrounding text are meaningful, so users can decide whether they should follow the link.

– So if cross-references are not always recommended, how else can I link between topics?

Relationship tables

A relationship table is best for indicating that certain topics as a whole are related. It tells readers:

If you’re involved with this topic, you should also be aware of those topics.

For users, links from relationship tables appear separately, usually below the actual topic text in a section of related links or “see also”, depending on how you choose to style them.

For writers, under the hood, a relationship table is a separate file that lists by type which topics are related to one another. For example, I could have a table like this:

Concept topics      Task topics               Reference topics
Income taxes        Calculate income taxes    Income tax deadlines
                    File income taxes         Addresses of tax offices

This means that the topics are related as a whole. And they will remain related, even if you update one of them by adding, changing or deleting a paragraph.
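
In DITA, for example, the table above would live in a map file as a reltable; here is a minimal sketch with invented file names:

    <!-- Relationship table in the map; the topics stay free of inline links. -->
    <reltable>
      <relheader>
        <relcolspec type="concept"/>
        <relcolspec type="task"/>
        <relcolspec type="reference"/>
      </relheader>
      <relrow>
        <relcell>
          <topicref href="c_income_taxes.dita"/>
        </relcell>
        <relcell>
          <topicref href="t_calculate_income_taxes.dita"/>
          <topicref href="t_file_income_taxes.dita"/>
        </relcell>
        <relcell>
          <topicref href="r_income_tax_deadlines.dita"/>
          <topicref href="r_tax_office_addresses.dita"/>
        </relcell>
      </relrow>
    </reltable>

At publishing time, each topic in a cell typically gets “related links” to the topics in the other cells of the same row, without any links hard-coded in the topics themselves.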

This is a pretty new concept if you’re used to writing long single documents. And it might feel awkward to have references outside and removed from the linked topics.

Advantages of relationship tables

Once you wrap your head around the idea, relationship tables have several advantages:

  • Keep your topic text flexible. With such a table, you don’t lock your topic into a certain scenario as a cross-reference does. A cross-reference establishes a fixed connection – which might be irrelevant or not even available for certain users or product versions. It’s much easier to handle such a topic in a relationship table, where it simply will not appear if it doesn’t exist for certain users or products.
  • Keep your references complete and up-to-date. With tables, it’s much easier to oversee the complete set of links and relationships than with cross-references inside topics. If you’ve ever tried to manually update and rephrase links to a new important topic which has replaced an obsolete topic in countless places, you will appreciate a table where you can simply add or omit any one topic.

Relationship tables are not superior to cross-references. They simply serve a different purpose. I hope this post helps you to appreciate the benefits of either type and to decide when to use which. Please leave a comment to let me know if I’ve succeeded or have been wrong or unclear somehow.

For more about converting documentation to topics, see previous posts about: