Structured topics, taxonomies & lightning at #STC14

More than 600 technical communicators met for the annual STC Summit in Phoenix, AZ, to demonstrate and expand the many ways in which they add value for users, clients, and employers. In a series of posts, I describe my personal Summit highlights and insights that resonated with me.

The journey to structured topics

I’ve framed my presentation “From Unstructured Documentation to Structured Topics” as a journey to a fjord precipice: Daunting, but nothing you cannot achieve with some planning and a little bit of confidence.

Concluding slide for my presentation on structured topics

The summary outline during Q&A, photo by @dccd.

In this “project walk-through mini-workshop”, I outlined how we can combine core tech comm proficiency, such as topic-based authoring, with content strategy and project management skills to master the migration to structured topics. The applied skills and the resulting content architecture can be a solid foundation for a full-blown future corporate content strategy that highlights technical communicators and their skills.

The engaged Q&A afterwards showed that the ideas resonated with the 80+ attendees. Many technical communicators are comfortable and well qualified to expand their topic-writing skills into information architectures and content modelling.

The trip to taxonomy

In her session “How to Create and Use a Functional Taxonomy“, Mollye Barrett told of a similar challenge: She was originally brought in to create the documentation for a highly customised implementation of financial software. When it became apparent that not just the software needed documentation, but also the workflows and processes which it was supposed to support, she wound up creating a taxonomy!

As she laid out her case study, Mollye showed how technical communicators’ core skills of task analysis and task-oriented documentation qualify them to create a taxonomy of business functions that maps a software’s functions to specific user tasks.

The project essentially consisted of explicating the company’s multi-faceted tacit knowledge and connecting all the pieces:

  • Create a consistent terminology by defining the standard financial terms in use.
  • Describe and classify the various functions of the software.
  • Identify and describe the user tasks which need documentation.

Mollye studied disparate, unstructured legacy documents, examined the software, and worked with specialists from the business and IT sides. Her main driver was her persistence in eliminating ambiguity and her goal of defining clear terms – or, put more simply, of creating order out of chaos.

Lightning strikes twice

A popular staple of the STC Summit is the two lightning talk rounds, moderated with understated wit by Rhyne Armstrong.

Liz Herman drove forward the multi-skilled tech comm theme with multiple costume changes in her talk “Perfecting the Hat Trick, Why My Hair’s Messy“. She demonstrated how tech comm’ers don the hats, caps, and helmets of sailors, fire fighters, cowboys, football players, the Irish, something I’ve forgotten and many more in just five minutes:

Liz Herman wearing different hats

Liz Herman dons different hats, photo by @dccd.

And Viqui Dill showed us how to use social media right in “Social Media is not the Devil“, her rousing karaoke performance to the tune of Charlie Daniels’ “The Devil Went Down to Georgia”:

Viqui Dill's karaoke lightning talk

Photo by @marciarjohnston.

Preview my STC14 session about structured topics

If you are curious about moving from unstructured documentation to structured topics – or if you cannot decide whether my session at the STC Summit next week is for you – here are the slides; maybe you’ll find them helpful:

Moving to topics? Join me at STC Summit!

If you’re moving to topic-based authoring (or considering the move), join me next week at the STC Summit in Phoenix for my presentation “From Unstructured Documentation to Structured Topics“.

The format will be a “project walk-through mini-workshop” in a regular session slot of 45 minutes. That means you won’t get a detailed project plan or silver bullet for a successful migration to topics. But you will get plenty of information about the involved methods, options, and risks. Most importantly, you will get a chance to improve your confidence – and hence your chances for success – for such an important project!

Here’s the abstract:

You’re sold on the benefits of structured content, but don’t know how to begin? This session shows you how to implement topic-based authoring by converting existing unstructured documentation into structured topics, even in regular office software such as Word.

The underlying process works for online help and user manuals, but also for other content such as wiki articles, training materials, etc., as long as you know which deliverables you need to create and their approximate purpose.

There are several stages to the process:

  1. Identify topic type or types per content section, for example, concept, task, reference, or use case. Content which mixes topic types can be sorted out with a little care.
  2. Re-chunk your sections to turn them into stand-alone topics. You can delete redundant or obsolete information which does not belong in a topic. Or you can spin it off into a topic of its own or integrate it with another, more suitable topic. Special strategies help you to deal with topics that are too complex.
  3. Re-sequence your topics, so they flow nicely when users read not just one or two of them, but need to follow a complete process. If the topic sequence doesn’t flow nicely, you may need to add some auxiliary topics which orient readers and ensure a good flow.
  4. Rewrite headings to give users enough orientation when they read just one or two topics. Rephrase them so users can quickly dip in and out of your documentation.
  5. Add links between related topics to ensure that the structured topics work in various use cases, even if users refer to only a few topics.
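As a playful sketch of stage 1, you could let a script take a first guess at the topic type from heading cues before sorting out the mixed cases by hand – the cue words and sample headings below are invented for illustration:

```python
import re

# Invented heading cues; real classification always needs human review.
TOPIC_TYPE_CUES = {
    "task": re.compile(r"^(how to|creating|adding|configuring|exporting)\b", re.I),
    "reference": re.compile(r"\b(reference|fields|parameters|settings)\b", re.I),
    "concept": re.compile(r"^(about|overview of|introduction to)\b", re.I),
}

def guess_topic_type(heading):
    """Return a first-guess topic type for a section heading."""
    for topic_type, cue in TOPIC_TYPE_CUES.items():
        if cue.search(heading):
            return topic_type
    return "mixed"  # flag for manual sorting, as in stage 1

for h in ["How to export a report", "About user roles",
          "Settings reference", "Working with projects"]:
    print(h, "->", guess_topic_type(h))
```

Such a guess is only a starting point; the re-chunking in stages 2 to 5 remains editorial work.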

This presentation emphasizes practical tasks. You will learn:

  • How and why to create a content model
  • How to identify topic types in existing content
  • How to re-chunk content into true topics
  • How to sequence your topics
  • How and why to write good headings for your topics
  • How to link related topics

We’ll meet on Monday, 19 May at 9:45 in 106 BC in the Phoenix Convention Center. Hope to see you there!

Why a content spec saves you time and money

A content specification will save you trouble, time, and money, especially when you’re not the lone writer on a documentation project. It will ensure that you offer your users consistent and holistic documentation across a team of writers.

A content specification is a list of all topics to be created which ideally maps planned topics to requirements and/or designs to ensure comprehensive and complete documentation. It usually comes in a table with one row per topic, listing:

  • Topic heading and/or file name
  • Topic type (concept, task, reference, or whatever else you may use)
  • Topic owner
  • Writer (in case writers may be different from topic owners)
  • Reviewers (for example, subject-matter experts)
  • Date ready for review or for post-review editing (depending on your workflow)
  • Mapped deliverables (where the topic appears, for example, a certain user manual, the online help, etc.)
  • Time estimate (how long will it take to write the topic, optionally, including review)
  • Documentation task type, to help you estimate time:
    • Create new topic
    • Major rewrite of existing topic
    • Minor fix or addition to existing topic

Without it, you risk delivering a bunch of topics with gaps in some places and overlaps in others. You can still string them together, but no overview topic can convey a coherent content experience, if you didn’t plan for it and bake it into the topics and their structure.

So a content spec is a blueprint of your documentation project, just as you would create one before you start building a house – or design any kind of experience.

Yet content specs often elicit negative reactions…

“Oh, but we’ve managed without one so far…”

Many tech writers I know are very competent, and a few are lucky to boot. Considering all their projects with more than, say, 50 topics which didn’t use a content spec, I’d bet half of them are incoherent (“organically grown” is an oft-used euphemism).

The cost doesn’t stop at poor user experience. Such documentation is also more difficult and more expensive to maintain, especially if you have overlapping topics and don’t remember to update both of them…

“Bah, reality eats specs for lunch…”

To an extent, yes. But on the whole, reality is an orderly patron. In my experience, the final documentation reflects the approved content spec in up to 80% of the topics. On average, 10% of the topics get added during the writing, when concepts or prerequisite and auxiliary procedures are found missing. Another 10% of the topics get reorganized because the initial content spec misunderstood something, or because content simply makes more sense somewhere else.

“Even if, we’ll fix it later…”

Yes, you can. But once again it’s very expensive. Remember that the list of topics is only one result of the content spec. Their structure is another. Finding that a structure by workflows is inferior to a structure by, say, instrument, requires not just re-ordering topics, but re-writing a lot of them.

You can avoid this by drawing up a complete content spec before you write a single topic and getting it signed off by the key stakeholders, so they know rather well what documentation they will get. The 20% deviations mentioned above are usually justifiable, if they conceivably improve the deliverables.

– Given that content specs are a big help in creating and maintaining efficient and effective user documentation, I strongly recommend using them. If you have any experience with or without content specs, I’d love to hear it.

2nd day of sessions at TCUK 13

The business and managing of tech comm was the predominant topic of my TCUK13 experience, as I reflect some more on the sessions I attended and the conversations I joined.

A. Westfold on collaborative authoring in DITA

Andrew presented a case study of McAfee over several years, from separate product teams and “artisanal” lone writers to a larger, unified team of writers collaborating in DITA. During this time, McAfee also grew by acquisitions which meant that additional writers, methods and tools came on board. Here are the most essential stages of their journey:

  1. Improve several individual procedures for quick wins: Single sourcing reduced translation efforts. Automating the translation round-trip cut out costly manual layout efforts.
  2. Move to topic-based authoring: They chunked up content into topics and moved them into DITA to validate the topic structure. (It turned out that many task topics could not be converted automatically and essentially had to be rewritten in valid structure.)
  3. Bring in a content management system to reap the full benefit from single sourcing and topic-based authoring. This helped to reduce the number of redundant topics and to make localization even more efficient.

While their journey is far from finished, McAfee has realized the following benefits so far:

  • Easier administration of topics than of larger content chunks before. It’s also easier to solicit reviews for smaller stand-alone chunks.
  • Faster, more consistent creation of deliverables for several product variants thanks to better use of standard templates.
  • Documentation processes align well with recently introduced agile development processes.
  • More efficient, streamlined workflow thanks to better integration between documentation and localization.

I really enjoyed Andrew’s presentation. It showed that projects to improve tech comm do work out, even if you don’t always see past the next stage, and you may have to adapt due to other changes in the company.

A. Warman on “Managing accessible mobile content”

Adrian Warman from IBM hooked up two important tech comm issues, accessibility and documentation for mobile, into a survey session.

Accessibility makes it easier for everyone to fit in, participate and contribute, irrespective of disabilities. In short, it ensures that a user’s disability does not mean a personal disadvantage. For tech comm, this means it is sufficient for documentation to be accessible in one format. For example, if your online help in HTML is accessible, it’s not necessary to make the same contents in PDF accessible as well – or vice versa, as the case may be. Adrian advised us to keep an eye on “EU mandate M 376” which may soon make some level of accessibility mandatory for products traded within the EU.

Mobile (smartphones and tablets) for tech comm means not just a technology, but an expectation, a mindset. It’s more than simply fitting our output onto smaller screens. Its different dimensions of interactivity, such as progressive disclosure and user-generated content, challenge us tech writers to re-think how to best convey an idea. Which taxonomy best supports both mobile devices and accessibility?

I don’t think there was a lot of new, revolutionary content here, but since I haven’t dealt much with either topic so far, it was a welcome introduction that was concise and well presented.

E. Smyda-Homa on useless assistance

Edward reported on his Twitter project @uselessassist, where he “Retweets to remind organizations of the frustration and negative emotions that result from poorly prepared assistance.” He presented many examples of poor user assistance. Some people complained about insufficient instructions, whether they had not enough images or only images. Some found the instructions too long (“I know how to prepare toast!”) or too short or redundant. Some pointed out typos or bad translations.

This was a very entertaining session – and you can easily get the gist of it by simply looking up the account or following the twitter feed. It’s anecdotal evidence in real-time that users actually do read the manual – or at least try to.

While every tweet is heartfelt, I think not every one merits a change in the documentation – if only because some contradict each other. But I find Edward’s project very enlightening and nodded to myself in embarrassed recognition a couple of times…

– Feel free to leave comments about any of the sessions, whether you have attended them or not.

“Bake your taxonomy” workshop at #tcuk13

Knowing your audience, their needs and use cases is key, not only when writing documentation, but also when designing its topic structure, navigation structure and taxonomies. That’s the insight around 50 participants came to at the end of the “Bake your taxonomy” workshop which Chris Atherton and I facilitated on the first day of TCUK13 in Bristol.

The insight itself is not revolutionary, of course, but it gave attendees a chance to try out content modelling and card sorting first-hand and consider alternative designs and difficult decisions that go into structuring documentation just right.

Explaining taxonomies and content models

Chris and I started the 3-hour workshop with a 30-minute presentation:

Organically grown content often develops into a mess of good, bad and ugly content with little or no discernible structure. An information architecture that was designed by central oversight and with a guiding higher principle might resemble a cathedral – but the organically grown reality more often resembles a bazaar.

Both models have their drawbacks: The cathedral might be out of touch with what users need to do and know in their daily lives. The bazaar supplies that better – but it’s much harder to navigate, unless you know it really well.

Chris and I presenting (photo by @JK1440)

Enter taxonomies, which are hierarchical classification systems. Just as children and veterinarians use different systems to distinguish and classify animals, so users and we who write for them can distinguish different topic types and structures and different ways to navigate topics according to their needs and use cases.
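As a minimal sketch, such a hierarchy can be modelled as nested groups whose leaves are topics – the recorder-flavoured headings below are invented, not the actual workshop material:

```python
# Invented taxonomy for a handheld audio recorder; leaves are topic lists.
taxonomy = {
    "Recording": {
        "Basic recording": ["Record a memo", "Stop and save a recording"],
        "Input settings": ["Set the input level", "Choose a microphone"],
    },
    "Playback": {
        "Listening": ["Play a recording", "Skip between markers"],
    },
}

def paths(tree, trail=()):
    """Yield every navigation path from the root down to a topic."""
    for name, child in tree.items():
        if isinstance(child, dict):
            yield from paths(child, trail + (name,))
        else:
            for topic in child:
                yield trail + (name, topic)

for p in paths(taxonomy):
    print(" > ".join(p))
```

Walking the paths like this is a quick way to check whether every topic is reachable in a way that matches a user’s needs and use cases.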

Exercises: “Bring out the scissors!”

Then we formed 12 groups of approx. 4 and set off on a couple of exercises:

  • Content modelling. Take a documentation set (in our example a user manual for a handheld audio recorder) and develop topic types and content models for users, their needs and use cases. Then re-chunk the manual into new topics according to topics types and users.
  • Card sorting. Take the topics and find the best sequence and hierarchy for them. Also consider the documentation format such as print, online, etc., and topic re-use opportunities between different formats and use cases.

Workshoppers baking their own taxonomy (photo by @JK1440)

After the first exercise, we had a short roundup of the different approaches and results of the groups and a short break, before we embarked on the second exercise.

As it turns out, it’s really difficult to separate content modelling (structuring within topics) from card sorting (structuring of topics). And in many cases there might be few benefits to separating those tasks. However, if you do the content model first and in isolation, you might get a more stable content model that lends itself to more than the one structure you pour it into.

To sum up, it was a very lively workshop with many good discussions – mostly within the groups of four, but also in the roundups when we collected approaches and insights. Chris and I thoroughly enjoyed it and learned a lot about what a diverse bunch tech comm audiences – and we as practitioners – can be.

If you’ve attended the sessions or want to know more about what happened and how, feel free to leave a comment.

The best KPIs support your tech comm strategy

The best Key Performance Indicators (KPIs) in tech comm are aligned to measure the success of your documentation strategy.

That’s some advance insight I got from Rachel Potts who will run a workshop about “Developing KPIs” for tech comm at TCUK in Bristol in a few weeks.

Measuring performance

KPIs are “a type of performance measurement to evaluate success… often success is simply the repeated, periodic achievement of some level of operational goal (e.g. zero defects, 10/10 customer satisfaction, etc.). Accordingly, choosing the right KPIs relies upon a good understanding of what is important to the organization.” (Wikipedia, “Performance indicator“)

But KPIs can be tricky! Says Business Administration professor H. Thomas Johnson: “Perhaps what you measure is what you get. More likely, what you measure is all you get. What you don’t (or can’t) measure is lost.” (Quoted and explained in a Lean Thinker blog post)

KPIs in tech comm

Some KPIs in tech comm are also deceptive. To pick a glaring example, measuring grammatical and spelling errors per page is comparatively easy and will probably help to reduce that figure. But one very fast way to improve this KPI is to change the page layout so there’s less text per page. Fewer words per page spread over more pages mean fewer mistakes per page – without correcting a single word. Also, the measure won’t improve documentation that’s out of date or incomplete or incomprehensible.
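A quick back-of-the-envelope calculation shows how that KPI can be gamed – all numbers are invented:

```python
errors = 60       # same text, same 60 errors in both layouts
words = 30000

dense_words_per_page = 500   # old, dense layout
airy_words_per_page = 300    # new, airier layout

dense_pages = words / dense_words_per_page   # 60 pages
airy_pages = words / airy_words_per_page     # 100 pages

print(errors / dense_pages)  # 1.0 errors per page
print(errors / airy_pages)   # 0.6 errors per page: "better" KPI, same errors
```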

Rachel advised me: “It depends on strategy and purpose: What’s right for one team is completely wrong for another. Measuring errors on the page is only a valuable KPI if the number of errors on a page relates closely to the purpose of your documentation. If there is a close relationship, then that’s a useful KPI!”

Strategic KPIs

So what would be alternative KPIs, depending on particular tech comm strategies?

If your strategy is to make customer support more cost-effective, you can measure (expensive) support calls against (cheaper, self-service) documentation traffic, while trying to align your documentation topics, so they can effectively answer support questions.

If your strategy is to improve your net promoter score and customer retention, you can measure users’ search terms for documentation, number of clicks and visit time per page, while trying to optimize content for findability and relevance to users’ search terms.
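As a minimal sketch (all figures invented), the two traffic-based strategies above could be tracked with simple ratios like these:

```python
# Invented monthly figures; real numbers come from your support
# system and your web analytics.
support_calls = 1200        # expensive, agent-handled
doc_visits = 48000          # cheaper, self-service
searches = 9000
zero_result_searches = 630  # searches that found no topic

# Cost-effectiveness: share of contacts deflected to documentation.
self_service_ratio = doc_visits / (doc_visits + support_calls)

# Findability: share of searches that found nothing (lower is better).
zero_result_rate = zero_result_searches / searches

print(f"self-service ratio: {self_service_ratio:.1%}")
print(f"zero-result search rate: {zero_result_rate:.1%}")
```

Tracked over time, the trend of such ratios matters more than any single monthly value.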

If your strategy is to improve content reuse and topic maintenance, you can measure redundant content to drive down the number of topics that have mixed topic-type content:

  • As long as you still have abundant conceptual information in task topics, you probably have redundant content. (Though a couple of sentences for context can be acceptable and helpful!)
  • As long as you have window and field help reference information in task or concept topics, you probably have redundant content.

What do you think? What KPIs are helpful? Which are you using, if any?