TCUK12, day 1: Workshops & company

The first day of the ISTC conference TCUK12 offered workshops and great opportunities to meet tech comm’ers from all walks of life and many corners of the earth.

When I arrived late on Monday evening, I promptly headed for the bar and joined the advance party for a last round – which lasted so late that I’m not even sure in which timezone it was 2 am before I turned in…

Robert Hempsall: Information Design 101

Robert Hempsall offered a great, engaging hands-on Information Design 101 workshop. The workshop focused on five key areas: content and structure, language, layout, typography, and lines and spatial organization. Using a form to apply to vote by mail in English elections, Robert led us through the process of designing the form to maximize clarity and usability.

Thanks to our versatile and engaged group of delegates, our work on the form was not only lively, but also showed how different disciplines contribute to better information design – from tech comm (with its principles of minimalism and parallelism) via user interface design (with its emphasis on making completion of the form as easy and painless as possible) to graphic design.

In this sense, the workshop presented a good example of “design by committee” (which is usually a terrible idea): We discussed the most intuitive and user-friendly sequence of the form’s elements and how best to phrase the section headings, as questions or as imperatives. A seemingly innocuous “all of the above” check box also caused a debate: Should it precede the individual options, to make completing the form quick, easy and painless? Should it come last, so users hopefully first read and reflect on the options? Or should it be omitted altogether, so users have to think about each option and select all that apply?

Form design is perhaps not a core task for many tech writers. Yet I found several challenges in it that are strikingly similar to getting a topic structure just right, whether it’s consistent, indicative headings, good, clear instructions or a logical structure.

Rowan Shaw: Quality Across Borders

Rowan Shaw‘s workshop Quality Across Borders: Practical Measures to Ensure Best-Value Documentation in Global Technology Businesses focused on creating documentation both with authors and for users who have English as a second language (ESL).

As an introduction, Rowan presented us with 10 sentences, each of which had some element that can create a problem for ESL readers, ranging from “10/03/12” – which could mean 10 March or October 3 – to metaphors and slang.

If you need to hire ESL authors, it can be helpful to ask applicants to sit an exam which tests skills such as procedure writing, fluency of expression, structuring, detail and consistency – but also their motivation for applying, to spot those candidates who want a foot in the door but might not be interested in tech comm in the long term. We discussed a sample test and whether it would be applicable and appropriate in all cultures.

Rowan suggested that, given the practicalities of hiring ESL authors globally, you might have to settle for less-than-perfect profiles in candidates. It is then important to know which skills are easier to teach someone on the job, for example, grammar, structuring, capitalization, punctuation and how to use a style guide. Other skills are harder to teach, such as an eye for detail, audience orientation and logical thinking, but also more intricate language skills, such as prepositions and correct modifiers.

Again, this workshop benefitted tremendously from the diverse talents in the room and the experiences delegates brought to the topic.

The right company

I keep harping on how much I enjoy and benefit from meeting other tech comm’ers. Just on the first day:

  • I found that several other doc managers are also wary of hiring subject matter experts who are less committed to tech comm because they might just want a foot in the door (see above).
  • I had an immensely helpful conversation with someone who’s a visiting professor and who could give me tips and ideas that I can try as I consider teaching as a future path.

So day 1 was very fruitful already, and I’m looking forward to more sessions and conversations to come.

A. Ames & A. Riley on info experience models at STC12

Andrea Ames and Alyson Riley, both from IBM, presented a dense whirlwind tour on “Modelling Information Experiences” that combines four related models into a heavy-duty, corporate information architecture (IA).

While the proceedings don’t include a paper on this session, Andrea posted the slides, and the presenters published a related article (login required), “Helping Us Think: The Role of Abstract, Conceptual Models in Strategic Information Architecture”, in the January 2012 issue of the STC’s Intercom journal.

The session proceeded in six parts. First, Alyson explained IA models in general and how they work. Then Andrea described each of the four model types that make up an IA.

IA models as science and art

Information architecture relates to science as its models draw on insights and theories of cognition. And its models relate to art as they aim to create a meaningful experience. Both aspects are important. Only if IA models manage to blend science and art can they touch the head and the heart.

The session focused on IA models rather than theories (which are too vague and abstract) or implementations (which are too specific and limited). Thanks to this in-between position of IA models, we can use them to

  • Ask the right questions to arrive at a suitable, feasible IA
  • Tolerate the ambiguities of “real life”

Models present descriptive patterns, not prescriptive rules. They don’t say how stuff must be, but how it can be represented. They differ from real life, but real life is still recognizable in them.

That means you cannot simply implement a model on autopilot and get it right. Instead, you have to

  • Think how to implement the model
  • Vary the model for your users’ benefit
  • Listen to the right knowledgeable people when implementing

Models help you think

To arrive at your concrete IA, you take the model’s abstract patterns and apply your project-specific details to them, the who, what, where and when.

This task is less mechanical and less copy-and-paste than it sounds. It’s not so much a question of following rules and recipes, but of making abstract patterns come to life in a coherent, flexible whole. (If you’ve ever tried to create meaningful concept or task topics by following an information model, you know it’s more than just filling in a DITA template. You need to use your own judgment about what goes where to achieve and maximize user benefit.)

Since there are four related models, you need to think carefully about how each of these models should benefit your users. And the models help you think; they scale and adapt to:

  • How your business and its information requirements develop over time, how they grow and expand in desired directions
  • How your customers find, understand and apply the information they need

The four IA model types

The IA model that Andrea then explained consists of four related model types:

use model (content model + access model = information model)

Each of these model types needs to be developed and validated separately.

The use model defines how users interact with information. It describes standard scenarios for optimal user experience within the product or system context. It outlines what information users need and why and how they use it. It includes use scenarios for the entire product life cycle and user personas that outline different types of users. Fortunately for us technical communicators, Andrea pointed out, describing all this is part of our core skill set.

The content model defines how information is structured effectively. This could be DITA (in the case of the company I work for, this is what we call our DITA-derived “information model”). It includes the granular information units and their internal structure, for example, task topics and their standard sequence of contained information. It also describes how these granular units are combined into deliverables.
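
As a hedged illustration (a hypothetical coffee-machine task, not our actual model), such a granular unit could be a DITA task topic with a fixed internal sequence of short description, prerequisite, steps and result:

    <task id="clean_milk_steamer">
      <title>Cleaning the milk steamer</title>
      <shortdesc>Clean the steamer after each use so milk residue
        does not clog the nozzle.</shortdesc>
      <taskbody>
        <prereq>Switch off the machine and let the nozzle cool down.</prereq>
        <steps>
          <step><cmd>Wipe the nozzle with a damp cloth.</cmd></step>
          <step><cmd>Purge the steamer for two seconds.</cmd></step>
        </steps>
        <result>The nozzle is free of milk residue.</result>
      </taskbody>
    </task>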

The access model defines how users access the information efficiently. It makes provided information useable, thanks to a navigation tree, a search function, a filtering function to increase the relevance of search results, an index, etc. Artefacts of this model type are wireframes, storyboards, a navigation tree and the like.

The information model defines how users get from one stage to the next. It uses the other three model types as input and fills in the gaps. It combines the content and access models, which might each work fine during, say, the installation and configuration stages, but may not yet carry a user persona from one stage to the next. As such, the information model is sort of like the auxiliary topic that you sometimes need to insert between concept, task and reference topics to make a complete book out of them. At the same time, this model type is more detailed than the use model and closer to the other two types.

My takeaways

I was extremely grateful for this session and rank it among the top 3 I’ve seen at the summit – or any tech comm conference I’ve been to! Yes, it was fairly abstract and ran too long – my only complaint is that it left only 2 minutes for discussion at the end.

As abstract as much of the session was, it actually solved a couple of problems I couldn’t quite put into words. After designing and teaching our company’s DITA-derived information model (which is a “content model” in this session’s terms), I thought our model was applicable and useful, but it had two problems:

  • Our model lacked context. – Now I know that’s because we haven’t mapped out our use model in the same detail and failed to connect the two.
  • Our model was baked into a template for less experienced writers to fill in – with varying results. – Now I know that’s because the models are not supposed to provide templates, but require writers to use their own judgment and keep in mind the big picture to deliver effective information.

Another lesson I learned is that many structured information paradigms seem to include a less rigid element that sweeps up much of the miscellaneous remnants. In DITA, there’s the general topic which is used for “under-defined” auxiliary topics to give structure, especially to print deliverables such as manuals. In this IA model, there’s the information model which fills the gaps and ensures a more seamless user experience than the other three models can ensure.

– For an alternative take, see Sarah Maddox’ post about this session.

Neil Perlin, Developing for the Unknown at STC12

It’s of course impossible to develop tech comm content for formats that don’t exist yet, but Neil Perlin showed how you can prepare for them to future-proof your work – and your job.

Problem

The problem he addresses is that content formats will change – we just don’t know yet how. The answer is generally to create solid content by adopting and using open standards. By doing that (and avoiding proprietary standards, hacked code and tweaked tools), we can ensure that our content will have a good life and be relatively easy to use in any future formats that might come down the line. Keeping up with developments is also important, so we can decide which standards and tools will help us in the future.

Better structure

On the structural level, you can best standardize your content by using information types (also called “topic types”), such as task, concept, reference, troubleshooting, “show me”, etc. Try to keep your number of types below 10. (DITA, a structured authoring standard, has only one general topic type plus the first three I’ve just mentioned.)

Once you have your information types defined, you can create templates that already contain the topic structure plus some explanatory boilerplate text which you and other writers can replace with your actual content.
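
For instance, a task template might contain boilerplate placeholders that writers overwrite – a rough sketch under my own assumptions, not a prescribed format:

    <task id="REPLACE_WITH_TOPIC_ID">
      <title>Verb + object, for example "Configuring the widget"</title>
      <shortdesc>One sentence: what the task achieves and why users do it.</shortdesc>
      <taskbody>
        <prereq>List what must be in place before starting, or delete.</prereq>
        <steps>
          <step><cmd>Write each step as one imperative sentence.</cmd></step>
          <step><cmd>Add further steps as needed.</cmd></step>
        </steps>
        <result>Describe how users can tell the task succeeded.</result>
      </taskbody>
    </task>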

Multiple output formats

On the technical level, CSS is a great medium to future-proof your content: It lets you create your topics independent of layout. And it is powerful enough to support just about any HTML-based format to make your content (and you as its author) look good on the web, on tablets and on mobile.

Using CSS to future-proof your content means, among other things:

  • Keep your CSS definition simple and clear for easier use and better maintenance.
  • Define all your output formats in one CSS file by defining different media types for different devices, web and print.
  • Check that the most common browsers and devices you write for support CSS correctly. (If not, you may have to live without some CSS features for the time being.)
  • Use relative size units (such as em or %) instead of points (“pt”), which are not resizable.
  • Use styles for tables instead of hand-tweaking column width, etc.
  • Use character styles instead of “Bold” or “Italics” as in Word’s style toolbar.

Migrating your content from more proprietary formats may require some cleanup of embedded and inline styles. Some of this can be scripted, but you may still find yourself doing a lot of semi-automatic search and replace.

Adopt, don’t adapt

On a methodological level, Neil advises adopting a standard because it suits you and your business needs. Adapting your content to a standard just because it’s common or new, without a business case, is not a good idea. So put your company and your job first and build up in-house knowledge about standards, formats and tools, so you can make the right choices.

Verdict

It takes Neil’s long experience to successfully combine general career advice with specific tips about CSS. I really enjoyed this session. I’ve learned a few new tricks and got general confirmation that our strategy to migrate our documentation to XML in a DITA-derived structure was the sensible thing to do.

How to disrupt tech comm in your organization?

If you need to “disrupt” your tech comm content, I believe it’s more beneficial to integrate content across the organization than just to get tech comm to become more business-oriented or more like marketing.

The idea comes out of a worthy new collaborative project Sarah O’Keefe launched last week, Content Strategy 101: Transform Technical Content into a Business Asset. (This blog post is based on a couple of comments I’ve left on the site.)

Tech comm goes to business school

A recurring discussion, not only on this blog but also elsewhere, is that tech comm needs to be more business-like to be justifiable in the future. Proponents of this view definitely have a point, if only because tech comm is often seen as a cost center and finds it hard to claim a return on investment.

I think, however, that this view is detrimental to all involved parties:

  • Tech comm risks abandoning its benefits to users and its quality standards in an attempt to be “more like marketing”.
  • Managers may risk permanent damage to the documentation of their product without solving the bigger problem.

Breaking down all silos

The bigger problem often is that most content production is inefficient – because it occurs in parallel silos. Many companies have gotten good at making their core business more efficient. But they often neglect secondary production of content which remains inefficient and fragmented.

I’ve seen several companies where marketing, technical communications and training (to name just three areas) waste time and money. Due to inefficient, silo’ed processes, tools and objectives, they create similar, overlapping content:

  • Marketing and tech comm create and maintain separate content to explain the benefits of a product.
  • Tech comm and training write separate instruction procedures for manuals and training materials.

Once companies wake up to these redundancies, all content-producing units will face pressure to streamline content and make it easier to produce and reuse. This will revolutionize corporate content production and publishing.

Quo vadis, technical communicators?

I think this issue raises two questions for technical communicators.

The strategic question is:

Which kind of content disruption is more beneficial for the organisation and for customers: Folding tech comm into marketing or integrating all content with a corporate content strategy?

The answer depends on several issues.

The tactical question is:

What’s the role of technical communicators in this content disruption: Are they the movers or the movees? Are they shaping the strategy or following suit?

The answer again depends on several issues:

  • What is your personality, clout and position in the organization?
  • Which team has the most mature content and processes to be a candidate to lead any kind of strategic change in content?

I think tech comm can lead a content strategy, especially if and when the tech comm team knows more about content than marketing or training or other content producers.

Concept or reference, what’s the difference?

The distinction between concept and reference topics is much easier and clearer when both support strong, clear task topics.

Concept or reference?

One of the recurring difficulties when moving to structured writing and topic-based authoring is the distinction between the topic types concept and reference. It’s an odd problem because the three topic types – concept, task and reference – seem rather logical and clear-cut in theory.

I’ve found that the best remedy for the confusion is the motivation that lies beneath topic-based authoring: Task orientation. Think of it this way:

  • Task orientation is a design strategy for your documentation.
  • Topic-based authoring is “only” the method to implement task orientation.

So concept topics and reference topics exist to support tasks. “The goal for users … is not to understand a concept but to complete a task.” (p. 41, DITA Best Practices: A Roadmap for Writing, Editing, and Architecting in DITA).

Let tasks lead the way

Much of the uncertainty whether a topic is a concept or a reference disappears when you have strong, solid task topics in place – topics that directly address your users and their daily tasks and help them get their job done:

  • How do I create cool espresso drinks with my new coffee machine?
  • How do I clean the milk steamer?

Such tasks are the context in which the two definitions of concept and reference from the DITA 1.2 specification make sense:

  • Concepts “provide conceptual information to support the performance of tasks”.
  • Reference topics “provide for the separation of fact-based information from concepts and tasks”.

Note that both definitions refer back to tasks, so task orientation and tasks like the ones above are at the heart of topic-based authoring.

To help readers understand the background to tasks, you can offer concepts about the kinds of espresso drinks there are and how they differ.

To support users to actually perform the tasks, you can offer reference topics with technical specifications such as required voltage and recommended water softness.
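
To make this concrete, here is a hedged sketch of how the espresso example might split across the three DITA topic types (content abridged and invented):

    <concept id="espresso_drinks">
      <title>Espresso drink varieties</title>
      <conbody>
        <p>A cappuccino combines espresso, steamed milk and milk foam;
           a latte uses more milk and less foam.</p>
      </conbody>
    </concept>

    <task id="create_espresso_drinks">
      <title>Creating espresso drinks</title>
      <taskbody>
        <steps>
          <step><cmd>Fill the portafilter with ground coffee.</cmd></step>
          <step><cmd>Start the brewing cycle.</cmd></step>
        </steps>
      </taskbody>
    </task>

    <reference id="machine_specifications">
      <title>Technical specifications</title>
      <refbody>
        <properties>
          <property><proptype>Voltage</proptype><propvalue>220–240 V</propvalue></property>
          <property><proptype>Water hardness</proptype><propvalue>max. 10 °dH</propvalue></property>
        </properties>
      </refbody>
    </reference>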

How to distinguish them?

If you’re in doubt in particular examples, maybe the table below can help you. I got some of the criteria from a Yahoo group discussion “Concept v Reference – Battle to the Death” and from a blog post on Dubious Prospects.

Concept topics                             | Reference topics
Are abstract ideas                         | Are specific settings
Explain meaning or benefit                 | Give facts without much explanation
Can stay when specifications change        | Change with specifications
Support understanding of tasks             | Support correct execution of tasks
Are read for background information        | Are read for detailed information
“Why brushing your teeth is important”     | “Stages of tooth decay”

Half-way DITA: Why some is better than none

If DITA seems like a good idea, but you cannot make the case for it, you can move towards structured writing and make your documentation “future-proof” by meeting the standard half-way.

At the company I work for, we tech writers used to create manuals in parallel with, but separate from, online help. Over time, this gave us a documentation set that was inconsistent in places and hard to maintain to boot. Topic-based authoring, which reuses topics in print and online, can fix that, of course.

First, a documentation standard

Deciding on the method is one thing, but we also wanted a consistent structure that made the documentation easier and clearer to use – and easier to maintain for us writers. That required a model that specifies which kinds of topics we want to offer, how these topics are structured inside and how they relate to one another.

As we looked towards a documentation standard, we had two options:

  • We could create a content inventory of our documentation, analyse and segment it, and tease some structure out of it.
  • Or we could trust that others had solved a similar problem before and see if we couldn’t use the wheel someone else had already invented.

Turns out the second option was quite feasible: The DITA 1.2 specification gives us about all the structure we need – and more. We left out the parts we didn’t need (for example, some of the more intricate metadata for printed books) and adopted a kind of DITA 1.2 “light” as our information model.

Second, the tools

Note that I haven’t mentioned any systems or tools so far! Even though it happened in parallel to rolling out topic-based authoring as our method and DITA light as our information model, the tool selection was mainly driven by our requirements for documentation workflows, structure, deliverables, and budget.

The tool that suited us best turned out to be MadCap Flare – even though it doesn’t create or validate DITA!

Using our information model in Flare, we believe we get most of the benefits of DITA – and a considerable improvement over our less-than-structured legacy content. And speaking to people at WritersUA 2011, it seems we’re not the only ones to move from less-than-structured writing to XML and something “close to DITA”.

Technically, we’ve defined the DITA elements we need as divs in the Flare stylesheets, but otherwise use the straight Flare authoring-to-publishing workflow. Flare is agnostic to whether a topic complies with DITA, is somehow structured but not complying or totally unstructured.
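
For illustration only (the class names are hypothetical, not our actual stylesheet), mapping DITA-style elements to divs in a Flare CSS could look roughly like this:

    /* DITA-inspired block elements defined as div classes */
    div.shortdesc { font-style: italic; margin-bottom: 0.5em; }
    div.prereq    { margin-bottom: 0.5em; }
    div.result    { margin-top: 0.5em; }

    /* Steps rendered as an ordered list with a DITA-style class */
    ol.steps > li { margin-bottom: 0.25em; }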

The benefits of DITA, half-way

To us tech writers, the largest benefit of DITA, half-way, is that we can actually do it. We could not have pulled off DITA, the full monty, which would have required a much longer project, a much bigger migration effort and, hence, an uncertain ROI.

For new topics, we are committed to writing them structured, so they follow the information model. To migrate legacy topics, we’ll have to ensure they have an identifiable topic type and a suitable heading, but we can clean up their insides in a “soft fade”, moving them towards structure one by one. This gives us a quicker win than cleaning up literally thousands of topics before having anything to show in the new method, model or system.

So we will have been working in Flare and with our home-grown information model for a long while before all topics actually comply with the model. But then we will have a documentation set which we can feasibly move into real structure, whether we opt for DITA or some other XML-based standard, with or without a CMS.

This post is an elaboration of a comment thread on the “Why DITA?” guest post on Keith Soltys’ Core Dump 2.0 blog.

5 steps from legacy documentation to topics

To move to topic-based authoring, you need to convert existing documentation into topics. The effort shouldn’t be underestimated, but it’s actually a pretty straightforward process. I’m describing how to convert sections in manuals, but it’s much the same for most content, whether it’s FAQs, wiki articles, training materials, etc.

Prerequisites

  • Some knowledge of topic-based authoring. You should know the topic types you will use, for example, concepts, tasks and reference topics.
  • Have a tactical framework. Ensure you know the documentation structure you aim for and how you will get there in a project. Consider for example the Top 4 tactics to structure legacy documentation.
  • A style guide with which your documentation complies definitely helps.

1. Identify topic type(s) per section

Define each section in your manual as one of your topic types, for example, concept, task or reference.

If topic types are mixed…

A common problem at this point is that you may have topics that mix topic types. For example, your topic contains concept information and task information. Or task information and reference information. For details, see When topics don’t quite work.
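
As a hedged illustration with invented content, a legacy section that explains what scheduled backups are and how to schedule one in the same breath would eventually be cut into a concept and a task:

    <!-- Before: one section that both explains scheduled backups
         and tells users how to set one up -->
    <!-- After: one topic per topic type -->

    <concept id="about_scheduled_backups">
      <title>Scheduled backups</title>
      <conbody>
        <p>A scheduled backup copies your data automatically at set times.</p>
      </conbody>
    </concept>

    <task id="schedule_a_backup">
      <title>Scheduling a backup</title>
      <taskbody>
        <steps>
          <step><cmd>Open the backup settings.</cmd></step>
          <step><cmd>Select a time and frequency, then save.</cmd></step>
        </steps>
      </taskbody>
    </task>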

2. Re-chunk sections to make them topics

Redistribute your content so each section becomes a topic:

  1. Cut out everything in a section that doesn’t fit the selected topic type and put it aside.
  2. Ensure that the topic that remains:
    • Is either a concept or a task or a reference.
    • Presents only one idea.
    • Has only one purpose.
    • Can stand alone, in context with others.
  3. Take the content you have cut out and put aside and do one of the following:
    • Integrate it into an existing topic where it fits better.
    • Create a new topic for it.
    • If it’s not relevant, throw it away. 🙂

If topics are too complex…

A common problem at this point is that you can end up with one or several topics that are too complex. Then you can try the following:

  • If you can describe a topic’s content as two separate, but related ideas, turn it into two sibling topics.
  • If you can fully summarise a topic’s content only as a very complex idea, turn it into a parent topic and create children topics for it.
  • If it’s a long procedure with more than 10 main steps in a top-level numbered list, try to cut it into two topics approximately in the middle.

3. Re-sequence your topics

Re-arrange the sequence of your topics, so they flow nicely when users read not just one or two of them, but need to learn a complete setup or operational process.

  1. Verify that your topics are in the best possible sequence and re-arrange them if necessary. You might want to try these categories:
    • Setup, in the sequence of tasks.
    • Operations, in the sequence of tasks.
    • Reference topics, in alphabetical order.
    • Concept topics can either appear as a section of their own, ordered from the large/general to the small/particular, or among the other topics as appropriate.
  2. Verify that your topics are complete and add topics, if necessary:
    • Do you have concept topics for all major elements in your task topics which explain what these elements are?
    • Do you have task topics for all concept topics which explain how to set up and operate the elements?

If the topic sequence doesn’t flow nicely…

A common problem is that the topics don’t flow as nicely once you have chunked them into stand-alone pieces. In this case, add some “glue” topics which orient readers and ensure a good flow, for example, at the beginning of chapters in books. Consider including these glue topics only in print deliverables; maybe they are not necessary in your online help.
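
A hedged sketch of what such a sequence might look like in a DITA map (file names invented; other tools have equivalent tables of contents):

    <map>
      <title>Coffee machine user guide</title>
      <topichead navtitle="Setup">
        <topicref href="unpacking_the_machine.dita"/>
        <topicref href="connecting_the_water_supply.dita"/>
      </topichead>
      <topichead navtitle="Operations">
        <!-- a concept placed among the tasks it supports -->
        <topicref href="about_espresso_drinks.dita"/>
        <topicref href="creating_espresso_drinks.dita"/>
        <topicref href="cleaning_the_milk_steamer.dita"/>
      </topichead>
      <topichead navtitle="Reference">
        <topicref href="error_messages.dita"/>
        <topicref href="technical_specifications.dita"/>
      </topichead>
    </map>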

4. Rewrite headings to guide readers

Often your legacy section headings work in the context of your manual, but don’t give users enough orientation when they read just one or two topics. Rephrase them so users can quickly dip in and out of your documentation. Keep these guidelines in mind:

  • Reflect the user’s need or goal in the heading.
  • Phrase headings so users know what’s in the topic when they see only the heading in a link in another topic.
  • Try to make headings unique so there’s no confusion when they appear in search result lists.

5. Add links between related topics

Ensure that your topics have links or cross-references between all related topics:

  • Do your concept topics link to corresponding task topics – and vice versa?
  • Do your task topics include or link to prerequisites and next steps?
  • Do your task topics link to corresponding reference topics?
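
A hedged example of such cross-references in DITA markup (topic file names invented; other tools offer equivalent related-links or “See also” features):

    <task id="creating_espresso_drinks">
      <title>Creating espresso drinks</title>
      <taskbody>
        <steps>
          <step><cmd>Start the brewing cycle.</cmd></step>
        </steps>
      </taskbody>
      <related-links>
        <link href="about_espresso_drinks.dita" type="concept"/>
        <link href="technical_specifications.dita" type="reference"/>
      </related-links>
    </task>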

Your turn

Do you find these steps helpful? Have you converted documentation into topics before? Please leave a comment.