Structured topics, taxonomies & lightning at #STC14

More than 600 technical communicators met for the annual STC Summit in Phoenix, AZ, to demonstrate and expand the many ways in which they add value for users, clients, and employers. In a series of posts, I describe my personal Summit highlights and insights that resonated with me.

The journey to structured topics

I’ve framed my presentation “From Unstructured Documentation to Structured Topics” as a journey to a fjord precipice: Daunting, but nothing you cannot achieve with some planning and a little bit of confidence.

Concluding slide for my presentation on structured topics

The summary outline during Q&A, photo by @dccd.

In this “project walk-through mini-workshop”, I outlined how we can combine core tech comm proficiency, such as topic-based authoring, with content strategy and project management skills to master the migration to structured topics. The applied skills and the resulting content architecture can be a solid foundation for a full-blown future corporate content strategy that highlights technical communicators and their skills.

The engaged Q&A afterwards showed that the ideas resonated with the 80+ attendees. Many technical communicators are comfortable and well qualified to expand their topic-writing skills into information architectures and content modelling.

The trip to taxonomy

In her session “How to Create and Use a Functional Taxonomy”, Mollye Barrett told of a similar challenge: She was originally brought in to create the documentation for a highly customised implementation of financial software. When it became apparent that not just the software needed documentation, but also the workflows and processes which it was supposed to support, she wound up creating a taxonomy!

As she laid out her case study, Mollye showed how technical communicators’ core skills of task analysis and task-oriented documentation qualify them to create a taxonomy of business functions that maps a software’s functions to specific user tasks.

The project essentially consisted of explicating the company’s multi-faceted tacit knowledge and connecting all the pieces:

  • Create a consistent terminology by defining the standard financial terms in use.
  • Describe and classify the various functions of the software.
  • Identify and describe the user tasks which need documentation.

Mollye studied disparate, unstructured legacy documents, examined the software, and worked with specialists from the business and IT sides. Her main drivers were her persistence in eliminating ambiguity and her goal of defining clear terms – or, put more simply, to create order out of chaos.

Lightning strikes twice

A popular staple of the STC Summit is the two lightning talk rounds, moderated with understated wit by Rhyne Armstrong.

Liz Herman drove forward the multi-skilled tech comm theme with multiple costume changes in her talk “Perfecting the Hat Trick, Why My Hair’s Messy”. She demonstrated how tech comm’ers don the hats, caps, and helmets of sailors, fire fighters, cowboys, football players, the Irish, something I’ve forgotten, and many more in just five minutes:

Liz Herman wearing different hats

Liz Herman dons different hats, photo by @dccd.

And Viqui Dill showed us how to use social media right in “Social Media is not the Devil”, her rousing karaoke performance to the tune of Charlie Daniels’ “The Devil Went Down to Georgia”:

Viqui Dill's karaoke lightning talk

Photo by @marciarjohnston.

 


Preview my STC14 session about structured topics

If you are curious about moving from unstructured documentation to structured topics – or if you cannot decide whether my session at the STC Summit next week is for you – here are the slides; maybe you’ll find them helpful:

Moving to topics? Join me at STC Summit!

If you’re moving to topic-based authoring (or considering the move), join me next week at the STC Summit in Phoenix for my presentation “From Unstructured Documentation to Structured Topics”.

The format will be a “project walk-through mini-workshop” in a regular session slot of 45 minutes. That means you won’t get a detailed project plan or a silver bullet for a successful migration to topics. But you will get plenty of information about the methods, options, and risks involved. Most importantly, you will get a chance to improve your confidence – and hence your chances of success – in such an important project!

Here’s the abstract:

You’re sold on the benefits of structured content, but don’t know how to begin? This session shows you how to implement topic-based authoring by converting existing unstructured documentation into structured topics, even in regular office software such as Word.

The underlying process works for online help and user manuals, but also for other content such as wiki articles, training materials, etc., as long as you know which deliverables you need to create and their approximate purpose.

There are several stages to the process:

  1. Identify topic type or types per content section, for example, concept, task, reference, or use case. Content which mixes topic types can be sorted out with a little care.
  2. Re-chunk your sections to turn them into stand-alone topics. You can delete redundant or obsolete information which does not belong in a topic. Or you can spin it off into a topic of its own or integrate it with another, more suitable topic. Special strategies help you to deal with topics that are too complex.
  3. Re-sequence your topics, so they flow nicely when users read not just one or two of them, but need to follow a complete process. If the topic sequence doesn’t flow nicely, you may need to add some auxiliary topics which orient readers and ensure a good flow.
  4. Rewrite headings to guide readers and give them enough orientation when they read just one or two topics. Rephrase them so users can quickly dip in and out of your documentation.
  5. Add links between related topics to ensure that the structured topics work in various use cases, even if users refer only to a few topics.

This presentation emphasizes practical tasks; you will learn:

  • How and why to create a content model
  • How to identify topic types in existing content
  • How to re-chunk content into true topics
  • How to sequence your topics
  • How and why to write good headings for your topics
  • How to link related topics

We’ll meet on Monday, 19 May at 9:45 in 106 BC in the Phoenix Convention Center. Hope to see you there!

Top 3 fixes when editing topics

A few recurring issues distinguish the editing of structured topics from that of other content. Here are the top 3 issues I’ve recently found while editing topics for language, consistency and structure.

1. Topic heading is weak

A heading can seem appropriate when you’re writing the topic, but it may still seem weak when you see it in the context of the entire help system, for example, in a list of search results.

Write topic headings so they give your readers a precise idea what that topic is about. Yes, that can be a challenge when you’re trying to be precise and concise at the same time:

  • Indicate what kind of information a topic contains. For task topics, consider starting the heading with an imperative verb. For concept topics, consider using noun phrases.
  • Offer enough context so your reader can identify what area or functionality in the product the topic refers to.

2. Purpose or user benefit is missing

A topic can look great and complete: It does whatever self-contained thing it set out to explain. But if it doesn’t also contain a why and wherefore, it’s harder for the reader to understand whether they’ve come to the right topic and how it helps them.

Include the purpose or user benefit of whatever the topic describes early on, in the first or second paragraph. Answer the readers’ potential questions:

  • Why is it important to know about or do whatever the topic describes?
  • How does this connect to my work?
  • How does it make my job or my life easier?

Be careful to describe the purpose or benefit of the topic’s content, for example, the actual concept or task, not the purpose of the topic. Focus on: “This function helps you…”, not “This topic tells you…”

3. Topic mixes topic types

A topic can look complete, comprehensive, and self-contained, but if it goes overboard and describes both functional tasks and underlying concepts at length, it tends to overtax the patience of those who only need to know one of the two. It also makes it harder to assign a clear heading, to structure the topic clearly, and to reuse the topic in additional contexts you might not yet know about.

Stick (mainly) to one topic type per topic. If you’re using the traditional triad of concept, task and reference topics, divide them accordingly, but keep it pragmatic. Ray Gallon and Mark Baker have both shown how a little conceptual information in task topics can go a long way, but it shouldn’t replace entire concept topics. Also see my previous post When topics don’t quite work from two years ago.

Summary

These top 3 issues and several others basically have the same underlying reason: We tend to write topics in the context of a function or a window, but that context is not necessarily familiar or identifiable to our readers.

Issues with the same reason also share the same basic solution: We should focus on writing topics in the context of our readers, their environment and their tasks in it.

Obvious structure in tech comm benefits all users

Informational text that exposes its “structural information, such as hierarchical relations” gets high reading comprehension scores, whether readers have prior subject knowledge or not. This is the result of a study reported in Learning Solutions Magazine by Chris Atherton. And it’s good news for technical communication because it means structured writing and topic-based authoring done well benefit novice and expert users alike.

The study

The study presented a 5,000-word article in three formats to two different groups. The three formats were:

  • A linear document of paragraphs
  • A hierarchical set of linked topics which was basically a web site six levels deep
  • A mixed format which combined linear text presentation with links to related topics that didn’t expose structure or hierarchy

The two audience groups were:

  • Novices without prior knowledge of the subject
  • Experts who had formal training in the subject

The results

It’s best to scroll down to the results graph over at the magazine website, but in case that disappears, here’s a summary of the different reading comprehension scores:

  • Novices understood the hierarchical format best, closely followed by the mixed format, with the linear format a distant third.
  • Experts understood the linear format best, closely followed by the hierarchical format, with the mixed format a distant third.

So exposing the hierarchy and structure of the text benefits novices and experts alike. If you’re writing for experts only, presenting linear text gives them a slight advantage, but “shuts out” novices.

The implications for tech comm

  • Structured authoring helps your users understand and remember. Novice and expert users alike can make sense of the information not only from the individual bits and pieces, but also from the structure of how everything hangs together. For example, consider relating concepts and sub-concepts to one another. Or when instructing users to do tasks, consider giving an overview of the big-picture process first. Then break down the process clearly into distinct procedures and further into individual steps. For many readers, easy access to structure also helps them retain information better, regardless of how they manage to memorize it.
  • Structured authoring helps you to create complete documentation efficiently. You can organize and maintain your information more efficiently with structure and hierarchy. Structure makes it easier to ensure that each piece of information has a distinct place, so you can avoid redundancies. Hierarchies make it clear where your concepts and procedures are complete and where you still have gaps. It’s easier to note a missing topic or sub-chapter than a missing paragraph somewhere in linear text.
  • Limited advantage of linear text. The study showed that linear text in paragraphs is most comprehensible for expert readers. But I think the advantage of this format is in general limited:
    • For novices, linear text is a distant third, so relying on “linear” requires that you have a homogeneously expert audience.
    • For you as a writer, linear text possibly takes more time or effort to maintain, depending on how much text you maintain and how often you update.
    • For other writers who need to edit or update your documentation, linear text is probably harder than topics that expose the internal structure of the subject matter.

By the way: Chris Atherton and I will lead a workshop together at TCUK13 in Bristol on 24 September. So if you’re in the area and want to “Bake your own taxonomy”, consider joining us. 🙂

Getting mileage from a tech comm mission statement

If you have a mission statement for technical communications, you can use it to anchor several strategic and tactical decisions. I’ve suggested a few general reasons Why you need a tech comm mission statement in my previous post. The valuable discussion that ensued led me to think we can get some mileage from a mission statement in some high-level tasks further downstream.

Consider a mission statement that says: “Our product help provides users with relevant product information at the right time in the right format.”

Defining audiences and deliverables

You can keep your audience in focus with a mission statement. Do you write for end users? Maybe there are different types, such as professionals vs. amateur hobbyists? Do you also address colleagues who expect to find internal information in the documentation? The mission statement above doesn’t specify an audience – and hence can be expected to address everyone who uses the product.

You can also derive your deliverables from a mission statement. Do you publish to several formats or only to one? What is your priority of formats? Web help first, PDF second seems a standing favorite that’s recently been disrupted by the emergence of mobile output. The mission statement above merely mentions the right format – so you need to figure out what format is right for your audience types. You can use personas to determine how your users work with the product – or better yet: Observe or survey them!

Defining information model and processes

You can derive your information model, the structural standard of your documentation, from your mission statement. This model should help you to reach the goal described in your mission and serve your audience. For example, topic-based architectures have long been popular. If you need to retrieve small chunks of information, for example to share steps in a task or exception handling advice, consider a more granular standard such as DITA.

Your processes should outline a repeatable, efficient and effective way to create your deliverables so they address your audience and, once again, help you to achieve your mission goal.

Your information model can suggest which topics or elements need to be created and updated for a given product or enhancement. Together with your processes, this makes it easier to plan and estimate documentation efforts – in theory at least…

– But with some management support and some persistence, a mission statement and some strategic decisions piggybacked onto it can help you get out of the proverbial hamster wheel.

What do you think? Can this be helpful? Or is it too far removed from real life? Do you have any experience with a larger documentation strategy based on a mission statement? If so, did it work?

Scott Abel on Structured Content at TCUK12

Scott Abel delivered his keynote It’s All About Structure! Why Structured Content Is Increasingly Becoming A Necessity, Not An Option in his usual style: Provocative, but relevant, fun and fast-paced (though he said he was going to take it slow). He even channeled George Carlin’s routine on Stuff: “These are ‘MY Documents’, those are YOUR documents. Though I can see you were trying to get to MY Documents…”

His style doesn’t translate well onto a web page, so I’ll restrict myself to his 9 reasons Why Structured Content Is Increasingly Becoming A Necessity:

  1. Structure formalizes content, so it can guide authors who need to make fewer decisions when writing it. It also guides readers who can find more easily where the relevant information is in the whole documentation structure or within a topic. And it guides computers which can extract relevant information automatically and reliably.
  2. Structure enhances usability by creating patterns that are easy to recognize and easy to navigate with confidence.
  3. Structure enables automatic delivery and syndication of content, for example, via Twitter – and you’ll be surprised occasionally when and how other people syndicate your “stuff”.
  4. Structure supports single-sourcing which means you can efficiently publish content on several channels, whether it’s print or different online outputs, such as a web browser, an iPad or a smartphone.
  5. Structure can automate transactions, such as money transfers, whether they are embedded in other content or content items in their own right.
  6. Structure makes it easier to adapt content for localization and translation, because you can chunk content to re-use existing translations or to select parts that need not only be translated but localized to suit a local market.
  7. Structure allows you to select and present content dynamically. You can decide which content to offer on the fly and automatically, depending on user context, such as time and location.
  8. Structure allows you to move beyond persona-ized content. This is not a typo: Scott doesn’t really like personas. He thinks they are a poor approximation of someone who is not you – and one that is no longer necessary. With structured content (and enough information about your users) you can personalize your content to suit them better than personas ever let you.
  9. Structure makes it much easier to filter and reuse content to suit particular variants, situations and users (see the sketch below).
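
To make points 4 and 9 a little more concrete, here is a minimal sketch using DITA’s profiling attributes. The attribute and element names are standard DITA, but the values, the file name and the content are invented for illustration: the same source topic carries audience metadata, and a small filter file decides what each published variant contains.

  <!-- Source topic: the same step list, profiled for two audiences -->
  <steps>
    <step audience="admin"><cmd>Open the server configuration console and enable the feature.</cmd></step>
    <step audience="user"><cmd>Ask your administrator to enable the feature for you.</cmd></step>
  </steps>

  <!-- end-user.ditaval: a filter applied at publishing time -->
  <val>
    <prop att="audience" val="admin" action="exclude"/>
    <prop att="audience" val="user" action="include"/>
  </val>

Publishing the same source once with and once without such a filter yields the two variants – no copying, no manual rework.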

TCUK12, day 1: Workshops & company

The first day of the ISTC conference TCUK12 offered workshops and great opportunities to meet tech comm’ers from all walks of life and many corners of the earth.

When I arrived late on Monday evening, I promptly headed for the bar and joined the advance party for a last round – which lasted so late that I’m not even sure in which timezone it was 2 am before I turned in…

Robert Hempsall: Information Design 101

Robert Hempsall offered a great, engaging, hands-on “Information Design 101” workshop. The workshop focused on the five key areas of content and structure, language, layout, typography, and lines and spatial organization. Using an application form for postal voting in English elections, Robert led us through the process of designing the form to maximize clarity and usability.

Thanks to our versatile and engaged group of delegates, our work on the form was not only lively, but showed how different disciplines contribute to the solution of better information design, from tech comm (with its principles of minimalism and parallelism) via user interface design (with its emphasis on making completion of the form as easy and painless as possible) to graphic design.

In this sense, the workshop presented a good example of “design by committee” (which is usually a terrible idea): We discussed the most intuitive and user-friendly sequence of the form’s elements and how best to phrase the section headings, as questions or as imperatives. A seemingly innocuous “all of the above” check box also caused a debate: Should it precede the individual options, to make completing the form quick, easy and painless? Should it come last, so users hopefully first read and reflect on the options? Or should it be omitted altogether, so users have to think about each option and select all that apply?

Form design is maybe not among the core tasks for many tech writers. Yet I’ve found several challenges in it that are strikingly similar to getting a topic structure just right, whether it’s consistent, indicative headings, good, clear instructions or a logical structure.

Rowan Shaw: Quality Across Borders

Rowan Shaw‘s workshop Quality Across Borders: Practical Measures to Ensure Best-Value Documentation in Global Technology Businesses focused on creating documentation both with authors and for users who have English as a second language (ESL).

As an introduction, Rowan presented us with 10 sentences, each of which had some element that can create a problem for ESL readers, ranging from “10/03/12”, which could mean 10 March or October 3, to metaphors and slang.

If you need to hire ESL authors, it can be helpful to ask applicants to sit for an exam which tests skills such as procedure writing, fluency of expression, structuring, detail, and consistency – but also their motivation for applying, to spot those candidates who want a foot in the door but might not be interested in tech comm in the long term. We discussed a sample test and whether it was applicable and appropriate in all cultures.

Rowan suggested that, given the practicalities of global ESL authors, you might have to settle for less than perfect profiles in candidates. Then it is important to know which skills are easier to teach someone on the job, for example, grammar, structuring, capitalization, punctuation and how to use a style guide. Other skills are harder to teach, such as an eye for detail, audience orientation, logical thinking, but also more intricate language skills, such as prepositions and correct modifiers.

Again, this workshop benefitted tremendously from the diverse talents in the room and the experiences delegates brought to the topic.

The right company

I keep harping on how much I enjoy and benefit from meeting other tech comm’ers. Just on the first day:

  • I found that several other doc managers are also wary of hiring subject matter experts who are less committed to tech comm because they might just want a foot in the door (see above).
  • I had an immensely helpful conversation with someone who’s a visiting professor and who could give me tips and ideas that I can try as I consider teaching as a future path.

So day 1 was very fruitful already, and I’m looking forward to more sessions and conversations to come.

A. Ames & A. Riley on info experience models at STC12

Andrea Ames and Alyson Riley, both from IBM, presented a dense whirlwind tour on “Modelling Information Experiences”, which combines four related models into a heavy-duty, corporate information architecture (IA).

While the proceedings don’t include a paper on this session, Andrea posted the slides, and the presenters published a related article (login required), “Helping Us Think: The Role of Abstract, Conceptual Models in Strategic Information Architecture”, in the January 2012 issue of the STC’s Intercom journal.

The session proceeded in six parts. First, Alyson explained IA models in general and how they work. Then Andrea described each of the four model types that specifically make up an IA.

IA models as science and art

Information architecture relates to science as its models draw on insights and theories of cognition. And its models relate to art as they aim to create a meaningful experience. Both aspects are important. Only if IA models manage to blend science and art can they touch the head and the heart.

The session focuses on IA models instead of theories (which are too vague and abstract) or implementations (which are too specific and limited). Thanks to the in-between position of IA models, we can use them to

  • Ask the right questions to arrive at a suitable, feasible IA
  • Tolerate the ambiguities of “real life”

Models present descriptive patterns, not prescriptive rules. They don’t say how stuff must be, but how it can be represented. They differ from real life, but real life is still recognizable in them.

That means you cannot simply implement a model on autopilot and get it right. Instead, you have to

  • Think how to implement the model
  • Vary the model for your users’ benefit
  • Listen to the right knowledgeable people when implementing

Models help you think

To arrive at your concrete IA, you take the model’s abstract patterns and apply your project-specific details to them, the who, what, where and when.

This task is less mechanical and less copy-and-paste than it sounds. It’s not so much a question of following rules and recipes, but of making abstract patterns come to life in a coherent, flexible whole. (If you’ve ever tried to create meaningful concept or task topics by following an information model, you know it’s more than just filling in a DITA template. You need to use your own judgment about what goes where to achieve and maximize user benefit.)

Since there are four related models, you need to think carefully how each of these models should benefit your users. And the models help you think; they scale and adapt to:

  • How your business and its information requirements develop over time, how they grow and expand in desired directions
  • How your customers find, understand and apply the information they need

The four IA model types

The IA model that Andrea then explained consists of four related model types:

use model (content model + access model = information model)

Each of these model types needs to be developed and validated separately.

The use model defines how users interact with information. It describes standard scenarios for optimal user experience within the product or system context. It outlines what information users need and why and how they use it. It includes use scenarios for the entire product life cycle and user personas that outline different types of users. Fortunately for us technical communicators, Andrea pointed out, describing all this is part of our core skill set.

The content model defines how information is structured effectively. This could be DITA (in the case of the company I work for, this is what we call our DITA-derived “information model”). It includes the granular information units and their internal structure, for example, task topics and their standard sequence of contained information. It also describes how these granular units are combined into deliverables.
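
To make this less abstract: in DITA, one such granular unit could be a task topic with its fixed internal sequence. The element names below are standard DITA; the content is an invented example, not taken from the presentation.

  <task id="create_purchase_order">
    <title>Creating a purchase order</title>
    <shortdesc>Create a purchase order to request goods from a supplier.</shortdesc>
    <taskbody>
      <prereq>You need the Purchasing role.</prereq>
      <context>A purchase order records what you buy, from whom, and at what price.</context>
      <steps>
        <step><cmd>Click <uicontrol>New order</uicontrol>.</cmd></step>
        <step><cmd>Select the supplier and add the items.</cmd></step>
        <step><cmd>Click <uicontrol>Submit</uicontrol>.</cmd></step>
      </steps>
      <result>The order appears in the list of open purchase orders.</result>
    </taskbody>
  </task>

The content model fixes this internal sequence (prerequisites before steps, the result after them) and defines how such units are combined into deliverables.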

The access model defines how users access the information efficiently. It makes provided information useable, thanks to a navigation tree, a search function, a filtering function to increase the relevance of search results, an index, etc. Artefacts of this model type are wireframes, storyboards, a navigation tree and the like.

The information model defines how users get from one stage to the next. It uses the other three model types as input and fills in the gaps. It combines the content and access models which probably work fine during the installation and configuration processes, but may not yet carry a user persona from one stage to the next. As such, the information model is sort of the auxiliary topic that you sometimes need to insert between concept, task and reference topics to make a complete book out of them. At the same time, this model type is more detailed than the use model and closer to the other two types.

My takeaways

I was extremely grateful for this session and rank it among the top 3 I’ve seen at the summit – or any tech comm conference I’ve been to! Yes, it was fairly abstract and ran too long – my only complaint is that it left only 2 minutes for discussion at the end.

As abstract as much of the session was, it actually solved a couple of problems I couldn’t quite put into words. After designing and teaching our company’s DITA-derived information model (which is a “content model” in this session’s terms), I thought our model was applicable and useful, but it had two problems:

  • Our model lacked context. – Now I know that’s because we haven’t mapped out our use model in the same detail and failed to connect the two.
  • Our model was baked into a template for less experienced writers to fill in – with varying results. – Now I know that’s because the models are not supposed to provide templates, but require writers to use their own judgment and keep in mind the big picture to deliver effective information.

Another lesson I learned is that many structured information paradigms seem to include a less rigid element that sweeps up much of the miscellaneous remnants. In DITA, there’s the general topic which is used for “under-defined” auxiliary topics to give structure, especially to print deliverables such as manuals. In this IA model, there’s the information model which fills the gaps and ensures a more seamless user experience than the other three models can ensure.

– For an alternative take, see Sarah Maddox’ post about this session.

Neil Perlin, Developing for the Unknown at STC12

It’s of course impossible to develop tech comm content for formats that don’t exist yet, but Neil Perlin shows how you can prepare for them to future-proof your work – and job.

Problem

The problem he addresses is that content formats will change, we just don’t know yet how. The answer is generally to create solid content by setting and using open standards. By doing that (and avoiding proprietary standards, hacking code and tweaking our tools), we can ensure that our content will have a good life and be relatively easy to use in any future formats that might come down the line. And keeping up with developments is important, so we can decide which standards and tools can help us in the future.

Better structure

On the structural level, you can best standardize your content by using information types (also called “topic types”), such as task, concept, reference, troubleshooting, “show me”, etc. Try to keep your number of types below 10. (DITA, a structured authoring standard, has only one general topic type plus the first three I’ve just mentioned.)

Once you have your information types defined, you can create templates that already contain the topic structure plus some explanatory boilerplate text which you and other writers can replace with your actual content.
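
As an illustration only – the element names and boilerplate below are invented for this sketch, not taken from Neil’s talk or from any specific standard – a template for a troubleshooting-type topic might look like this, with placeholder text that authors overwrite:

  <troubleshooting id="REPLACE_WITH_TOPIC_ID">
    <title>State the symptom the user observes</title>
    <condition>Describe when and where the problem occurs.</condition>
    <cause>Explain the most likely cause in one or two sentences.</cause>
    <remedy>
      <step>Write each corrective action as a single imperative sentence.</step>
      <step>Add further steps as needed; delete this boilerplate before publishing.</step>
    </remedy>
  </troubleshooting>

The boilerplate doubles as a writing guideline, so the structure stays consistent even when several authors contribute.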

Multiple output formats

On the technical level, CSS is a great medium to future-proof your content: It lets you create your topics independent of layout. And it is powerful enough to support just about any HTML-based format to make your content (and you as its author) look good on the web, on tablets and on mobile.

Using CSS to future-proof your content means, among other things (see the example stylesheet after this list):

  • Keep your CSS definition simple and clear for easier use and better maintenance.
  • Define all your output formats in one CSS file by defining different media types for different devices, web and print.
  • Check that the most common browsers and devices which you write for support CSS correctly. (If not, you may have to live without some CSS features for the time being.)
  • Use relative style size units instead of points (“pt”) which are not resizable.
  • Use styles for tables instead of hand-tweaking column width, etc.
  • Use character styles instead of “Bold” or “Italics” as in Word’s style toolbar.
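
Here is a minimal sketch of what such a single, multi-output stylesheet might look like; the class names are invented for illustration, but the techniques are the ones listed above.

  /* Shared rules: relative units, no hand-tweaked layout */
  body            { font-family: Arial, sans-serif; font-size: 100%; }
  h1              { font-size: 1.6em; }           /* scales with user settings, unlike fixed pt sizes */
  table.params    { width: 100%; border-collapse: collapse; }
  table.params td { padding: 0.3em 0.6em; }       /* style the table, don't hand-tweak columns */
  .uicontrol      { font-weight: bold; }          /* character style instead of ad-hoc bold */
  .filepath       { font-family: monospace; }

  /* Media types keep web, mobile and print in one file */
  @media print {
    body { margin: 2cm; }
    .nav { display: none; }                       /* no navigation bar on paper */
  }
  @media screen and (max-width: 480px) {
    body { font-size: 110%; }                     /* slightly larger text on small screens */
  }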

Migrating your content from more proprietary formats may require some cleanup of embedded and inline styles. Some of this can be scripted, but you may still find yourself with a lot of semi-automatic search and replace.

Adopt, don’t adapt

On a methodological level, Neil advises to adopt a standard because it suits you and your business needs. Adapting your content to a standard just because it’s common or new, without a business case, is not a good idea. So put your company and your job first and build up in-house knowledge about standards, formats and tools, so you can make the right choices.

Verdict

It takes Neil’s long experience to successfully combine general career advice with specific tips about CSS. I really enjoyed this session. I’ve learned a few new tricks and got general confirmation that our strategy to migrate our documentation to XML in a DITA-derived structure was the sensible thing to do.