2nd day of sessions at TCUK 13

The business and management of tech comm were the predominant topics of my TCUK13 experience, as I reflect some more on the sessions I attended and the conversations I joined.

A. Westfold on collaborative authoring in DITA

Andrew presented a case study of McAfee over several years, from separate product teams and “artisanal” lone writers to a larger, unified team of writers collaborating in DITA. During this time, McAfee also grew by acquisition, which meant that additional writers, methods and tools came on board. Here are the most essential stages of their journey:

  1. Improve several individual procedures for quick wins: Single sourcing reduced translation efforts. Automating the translation round-trip cut out costly manual layout efforts.
  2. Move to topic-based authoring: They chunked up content into topics and moved them into DITA to validate the topic structure. (It turned out that many task topics could not be converted automatically and essentially had to be rewritten in valid structure.)
  3. Bring in a content management system to reap the full benefit from single sourcing and topic-based authoring. This helped to reduce the number of redundant topics and to make localization even more efficient.

While their journey is far from finished, McAfee has realized the following benefits so far:

  • Easier administration of topics than of larger content chunks before. It’s also easier to solicit reviews for smaller stand-alone chunks.
  • Faster, more consistent creation of deliverables for several product variants thanks to better use of standard templates.
  • Documentation processes align well with recently introduced agile development processes.
  • More efficient, streamlined workflow thanks to better integration between documentation and localization.

I really enjoyed Andrew’s presentation. It showed that projects to improve tech comm do work out, even if you don’t always see past the next stage and may have to adapt due to other changes in the company.

A. Warman on “Managing accessible mobile content”

Adrian Warman from IBM combined two important tech comm issues, accessibility and documentation for mobile, in a survey session.

Accessibility makes it easier for everyone to fit in, participate and contribute, irrespective of disabilities. In short, it ensures that a user’s disability does not mean a personal disadvantage. For tech comm, this means it is sufficient that documentation is accessible in one format. For example, if your online help in HTML is accessible, it’s not necessary to make the same contents accessible in PDF as well – or vice versa, as the case may be. Adrian advised us to keep an eye on “EU mandate M 376”, which may soon make some level of accessibility mandatory for products traded within the EU.

Mobile (smartphones and tablets) for tech comm means not just a technology, but an expectation, a mindset. It’s more than simply fitting our output onto smaller screens. Its different dimensions of interactivity, such as progressive disclosure and user-generated content, challenge us tech writers to re-think how to best convey an idea. Which taxonomy best supports both mobile devices and accessibility?

I don’t think there was a lot of new, revolutionary content here, but since I haven’t dealt much with either topic so far, it was a welcome introduction that was concise and well presented.

E. Smyda-Homa on useless assistance

Edward reported on his twitter project @uselessassist, where he “Retweets to remind organizations of the frustration and negative emotions that result from poorly prepared assistance.” He presented many examples of poor user assistance. Some people complained about insufficient instructions, whether they had too few images or only images. Some found the instructions too long (“I know how to prepare toast!”), too short, or redundant. Some pointed out typos or bad translations.

This was a very entertaining session – and you can easily get the gist of it by simply looking up the account or following the twitter feed. It’s anecdotal evidence in real-time that users actually do read the manual – or at least try to.

While every tweet is heartfelt, I think not every one merits a change in the documentation – if only because some contradict each other. But I find Edward’s project very enlightening and nodded to myself in embarrassed recognition a couple of times…

– Feel free to leave comments about any of the sessions, whether you have attended them or not.


Getting mileage from a tech comm mission statement

If you have a mission statement for technical communications, you can use it to anchor several strategic and tactical decisions. I’ve suggested a few general reasons Why you need a tech comm mission statement in my previous post. The valuable discussion that ensued led me to think we can get some mileage from a mission statement in some high-level tasks further downstream.

Consider a mission statement that says: “Our product help provides users with relevant product information at the right time in the right format.”

Defining audiences and deliverables

You can keep your audience in focus with a mission statement. Do you write for end users? Maybe there are different types, such as professionals vs. amateur hobbyists? Do you also address colleagues who expect to find internal information in the documentation? The mission statement above doesn’t specify an audience – and hence can be expected to address everyone who uses the product.

You can also derive your deliverables from a mission statement. Do you publish to several formats or only to one? How do you prioritize formats? Web help first, PDF second seems a long-standing favorite that has recently been disrupted by the emergence of mobile output. The mission statement above merely mentions the right format – so you need to figure out which format is right for your audience types. You can use personas to determine how your users work with the product – or better yet: observe or survey them!

Defining information model and processes

You can derive your information model, the structural standard of your documentation, from your mission statement. This model should help you to reach the goal described in your mission and serve your audience. For example, topic-based architectures have long been popular. If you need to retrieve small chunks of information, for example to share steps in a task or exception handling advice, consider a more granular standard such as DITA.

Your processes should outline a repeatable, efficient and effective way to create your deliverables so they address your audience and, once again, help you to achieve your mission goal.

Your information model can suggest which topics or elements need to be created and updated for a given product or enhancement. Together with your processes, this makes it easier to plan and estimate documentation efforts – in theory at least…

– But with some management support and some persistence, a mission statement and some strategic decisions piggy-backed onto it can help you get out of the proverbial hamster wheel.

What do you think? Can this be helpful? Or is it too far removed from real life? Do you have any experience with a larger documentation strategy based on a mission statement? If so, did it work?

ROI of topic-based authoring and single sourcing

Breaking down content silos brings benefits and ROI to topic-based authoring, even if you have little or no translation. I’ve cut the time to write and maintain three deliverables by 30-40% by reusing topics.

Content silos

The company I work for supplies documentation for its software solution in different formats, among them:

  • Release notes inform customers about new features and enhancements in new versions.
  • User manuals describe individual modules of the product, how to set them up, how to operate them and what kind of results to expect from them.
  • Online help focuses on reference information for windows and fields, but has some overlaps with information in user manuals.

Content silos maintain separate contents per deliverable.

Originally, these three deliverables were created and maintained in separate “content silos”, using separate tools and separate source repositories. So the documentation process looked like this:

  1. Write release notes in Word.
  2. Update or write user manuals in Word.
  3. Update the online help in a custom-built help tool that uses Word as an editor and Microsoft’s HTML Help Workshop to publish to Microsoft Compiled HTML Help (.CHM).

I’ve found that I could save some time by writing the release notes with the other deliverables in mind, so I could copy and paste content and reuse it elsewhere. For example, my release notes describe a new batch job which helps to automate a tedious workflow. If I describe the batch job in detail, the same content fits easily into the user manual. (Yes, it bloats the release notes, but at least the information is available at the release date, while we didn’t always manage to update the user manual in time.)

Copying and pasting worked even better once I structured the content in each of the three silos as topics. For example, a task topic from the release notes would fit almost gracefully among similar task topics in an existing manual.

But such manual copy-and-paste reuse is really not efficient or maintainable, because my stuff is still all over the place. I may write in – or copy to – four places, but then remember to update only two of them; enter inconsistency and its twin brother unreliability.

Getting real about reuse

To get the full benefits and ROI of topic-based authoring, we’ve found it’s not enough to simply write topics and keep your concepts separate from your tasks. We’ve had to adjust our documentation architecture, our tools and our process.

The guiding principle is: “Write once, publish many”. This tech comm mantra proved to be the key. We now aim to have each piece of information in only one topic. That unique topic is the place we update when the information changes. And that’s the topic we link to whenever a context requires that information.

Single sourcing is the best way to get a collection of unique topics into user manuals and online help. So we needed to consolidate our separate content silos into a single repository from which we can publish all our deliverables.

MadCap Flare is the tool we chose. It gives us a reliable, yet flexible way to maintain a common repository of topics. For each deliverable, such as release notes and user manuals in PDF and online web help, we simply create a new table of contents (TOC) which collects all topics that go into the deliverable.

With topic reuse, we define tables of contents to assemble topics per deliverable.
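
Under the hood, such a TOC is a small XML file. Here’s a simplified sketch of what a per-deliverable Flare TOC might look like – the file names and titles are made up, and details vary by Flare version:

    <?xml version="1.0" encoding="utf-8"?>
    <CatapultToc Version="1">
      <!-- Only the TOC is specific to the deliverable; the topic files are shared -->
      <TocEntry Title="Release Notes 5.2" Link="/Content/ReleaseNotes/Introduction.htm">
        <TocEntry Title="Batch Jobs Explained" Link="/Content/Concepts/BatchJobs.htm" />
        <TocEntry Title="Running the Batch Job" Link="/Content/Tasks/Workflow/RunBatchJob.htm" />
      </TocEntry>
    </CatapultToc>

The same topic files can appear in the TOCs for the user manual and the online help without being duplicated.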

The writing process has changed considerably: Previously, I would focus on writing a release note entry or a chapter in a user manual. Now I find myself focusing on a specific task or concept and how to describe it as stand-alone content so it works for the user, whether it appears in a user manual or in the release notes.

The flexibility of MadCap Flare’s conditions feature and of our DITA-based information model helps us to accommodate the differences between our deliverables. We still write a few topics explicitly for a specific deliverable. For example, in release notes, short “glue” topics serve to introduce new concept topics and task topics, to establish some context for the user and explain why a new feature is “cool”. In user manuals, an introductory chapter with a few topics explains what to find where and which sections to read for a quick start.
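
As an illustration, this is roughly how a condition looks inside a Flare topic (XHTML with MadCap extensions); the condition set and tag names here are invented:

    <?xml version="1.0" encoding="utf-8"?>
    <html xmlns:MadCap="http://www.madcapsoftware.com/Schemas/MadCap.xsd">
      <body>
        <h1>Batch jobs</h1>
        <p>A batch job automates the nightly import of transactions.</p>
        <!-- This paragraph only appears in targets that include the ReleaseNotes condition -->
        <p MadCap:conditions="Deliverables.ReleaseNotes">This batch job is new in version 5.2.</p>
      </body>
    </html>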

But most of the topics can now be used in release notes, user manuals and online help alike. Since I’ve gone from copy-and-paste in three content silos to single sourcing topics in Flare, the time to write and update documentation for my module has decreased by 30-40%. It’s on the lower end if a new version brings a lot of brand-new features. It’s higher if there are more enhancements of existing functionality.

First day at tekom12

This is my second year at tekom, the world’s largest tech comm conference, held annually in Wiesbaden, Germany. tekom is nominally a German conference that coincides with its international sibling conference, tcworld, in English. As the hashtag confusion on twitter shows once again, the English tech comm scene tends to use both names. (Which makes me wonder why the organizers don’t simply use the tekom name for the whole thing, which has sessions in both English and German…?)

My session on meaning in tech comm

I skipped the morning sessions, since I was feeling a little under the weather. I didn’t even get to tekom until around 1 pm, but in plenty of time for my own presentation on How our addiction to meaning benefits tech comm. I had submitted two very different talks, and I thank the organizers that they picked the “wacky” one. And to my surprise, I had about 100 people interested in meaning, semiotics and mental models! I thought the talk went well. I had some nice comments at the end and some very positive feedback on twitter afterwards.

You can find my slides on Slideshare and on the conference site. Sarah Maddox has an extensive play-by-play write-up of how my session went on her blog.

Content Strategy sessions

Scott Abel has put together a very good stream of content strategy sessions, where I attended the presentations of Val Swisher and Sarah O’Keefe (I also blogged about Sarah’s presentation). I’m not sure if my observation is accurate, but it seemed to me that there was less interest and excitement about this stream this year than at the premiere last year. As befits content strategy, both sessions I attended were strategic, rather than operational, so they dealt primarily with how tech comm fits into the larger corporate strategy.

Marijana Prusina on localizing in DITA

Then I went to hear Marijana Prusina give a tutorial on localizing in DITA. I have no first-hand experience with DITA, but I use a DITA-based information model at work, so this gave me a reality check of what I was missing by not using the real thing. Seeing all the XSLT you get to haggle with in the DITA Open Toolkit, I cannot exactly say that I regret not using DITA.

Beer & pretzels

Huge thanks to Atlassian and k15t who sponsored a reception with free beer & pretzels – and even t-shirts if you left them your business card. This coincided with the tweet-up. It was good to see tech comm colleagues from around the world (Canada, the US, Australia, France and Germany, of course). Some I had known via twitter or their blogs for a while, so it was a welcome chance to finally meet them in person.

– For more – many more – session write-ups, check out Sarah Maddox’ blog!

– So much for the first day, two more to come. I’m looking forward to them!

Top 4 steps from manuals to topics

A little bit of planning ensures you get clean, manageable topics from your conversion of user manuals.

While most help authoring tools support importing Word documents, there’s more to getting re-usable topics out of user manuals, as I’ve found out. I’ve spent the last few weeks converting 3 related Word manuals of 360 pages total into 400 topics in MadCap Flare – though I believe that the process below applies to other tools as well.

The aim was to merge the contents from separate Word-to-PDF manuals with the online help topics into a single sourcing repository from which we can create both online help and manuals.

My two key lessons of the conversion are:

  • Plan first, execute second – several hundred topics are too many for trial & error and picking up the pieces later.
  • Do each task as early as possible – some Word idiosyncrasies are hard to clean up after the conversion.

And here’s how I did it in 4 steps:


1. Start with plans

The whole conversion exercise benefited greatly from a couple of designs that I followed:

  • An information model
  • A folder structure for my topics

The information model defines the 4 topic types we have and what each type contains internally. It’s basically “DITA, without the boring parts” about which I blogged previously.

The folder structure divides my one Flare project into several sub-folders, so I don’t have 400 topics in one heap. Instead, I now have 13 sub-folders which divide up my topics by topic type (concept, task or reference) and even by task type (initial setup or daily workflow). That makes it easier to manage the topic files.
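
For illustration, the structure looks roughly like this (simplified to fewer than our actual 13 sub-folders):

    Content/
      Concepts/
      Tasks/
        Setup/
        Workflow/
      Reference/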

2. Prepare for the import

Once I had the basic structure to organize topics and their insides, I prepared my Word manuals so I wouldn’t end up with a GIGO situation: Garbage In, Garbage Out.

First, I edited the documents into topics, so each section could become either a concept, task or reference topic – or an auxiliary topic which ensures that the chunks still flow nicely when you read them in the future manual output. I also ensured that section headings indicate topic contents and type:

  • Concept topics use noun phrases as headings
  • Task topics start with an imperative

Then, I cleaned up the documents. You can convert unstructured Word files with their layout of styles, modified styles and manual formatting into topics just fine, but it will give you unmanageable content and endless grief. So do your future self a favor and dissolve all modified styles and manual formatting first.

3. Import

Thus prepared, I’ve found that Flare’s built-in Word import is very good, consistent and reliable if you throw well-structured Word documents at it. Only tables didn’t import well (or I couldn’t figure out how to do it), so I re-styled them in Flare.

If you’re a stickler for clean topics, you can go ahead in Flare and clean out unnecessary remnants:

  • Remove Word’s reference tags in cross-references by replacing *.htm#_Ref1234567" with *.htm" (see the example below)
  • Remove Word’s TOC tags in Flare’s table of contents by replacing *.htm#_Toc1234567" with *.htm"
  • Remove Word’s TOC anchors in topics by deleting <a name="_Toc*"></a>
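
To illustrate the first replacement with a made-up link, it turns Word’s bookmark-specific link target into a plain topic link:

    <!-- Before: the link targets a Word _Ref bookmark inside the topic -->
    <a href="RunBatchJob.htm#_Ref1234567">Run the batch job</a>

    <!-- After: the link targets the topic itself -->
    <a href="RunBatchJob.htm">Run the batch job</a>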

4. Add value to topics

At this point, I had a pile of 400 clean topics, but no added value from the conversion yet. That came from additional tasks:

  • Dividing up topic files into the folder structure, which makes hundreds of topic files manageable.
  • Assigning a topic type to topic files (Flare lets you do that for several files at once, so this was very fast), which makes the content intelligent, because topics “know” what they are.
  • Assigning in-topic elements (as div tags) to topic paragraphs according to the information model, which allows you to identify and reuse even parts of topics, for example, instruction sections or example sections (see the sketch after this list).
  • Converting cross-references into relationship tables where feasible, which ensures that links are easier to manage and to keep up to date.
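
Here’s a minimal sketch of such an in-topic element in a Flare topic; the element name “instruction” is just an example, your information model may use different names:

    <!-- The div marks the instruction section, so it can be identified and reused on its own -->
    <div class="instruction">
      <ol>
        <li>Open the batch job monitor.</li>
        <li>Select the failed job and click Restart.</li>
      </ol>
    </div>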

Your turn

Have you done a similar conversion? What were your experiences? Did you do it yourself or with an outside consultant? Feel free to leave a comment.

DITA with confidence (DITA Best Practices book review)

I recommend DITA Best Practices: A Roadmap for Writing, Editing, and Architecting in DITA by Laura Bellamy, Michelle Carey, and Jenifer Schlotfeldt to anyone who looks for practical guidance with DITA or topic-based writing with a future DITA option. (This book review appeared in the Summer 2012 issue of ISTC’s Communicator magazine, on p. 57, in a different format.)

Cover of the DITA Best Practices book

The DITA bookshelf has been growing slowly but surely. Thanks to the recent addition of the seminal DITA Best Practices, you can now find most information you need for a DITA implementation project in one book or another.

The paperback comes from IBM Press, which also gave technical writers Developing Quality Technical Information by Gretchen Hargis et al. If you know that recommended title, you will enjoy the same usefulness and clear layout in this new book.

Starting with topics

DITA Best Practices addresses the practical concerns of writers, editors and architects of DITA content in three well-structured parts. The first part on writing starts with a chapter on topic-based writing and task orientation as two methods underlying DITA. The authors give clear instructions and guidelines for both methods. A generous amount of tips, best practices and ‘watch out’ warnings adds the voice of the experienced practitioner, which helps to keep you on track and avoid beginners’ mistakes. The fictional ‘Exprezzoh 9000N’ coffeemaker is used consistently throughout the book to illustrate tasks and topics. Explanations of why and how the methods work give writers the motivation to apply the advice with confidence. The chapter ends with a concise wrap-up section of the big points and a checklist to ensure you apply these big points in your work.

I have outlined the first chapter in such detail because its clear and competent combination of elements – instructions, tips and warnings, examples, motivation, wrap-up and checklists – makes this book so useful throughout.

One chapter each is then dedicated to topic types task, concept and reference. Each chapter describes the characteristics and motivation for the topic type, followed by instructions and examples along the standard DITA topic structure. The task chapter, for example, proceeds from <title> via <shortdesc>, <context>, <prereq> to <steps>, etc. However, most guidelines, examples, tips and warnings apply to good topic-based writing practices in general.
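
To give you an idea of that structure, here is a minimal task topic along the lines of the book’s coffeemaker scenario – the content is made up and optional elements are omitted:

    <!-- A minimal DITA task topic: title, short description, then the task body -->
    <task id="clean_milk_steamer">
      <title>Cleaning the milk steamer</title>
      <shortdesc>Clean the steamer after each use to prevent clogged nozzles.</shortdesc>
      <taskbody>
        <prereq>Switch off the Exprezzoh 9000N and let the nozzle cool down.</prereq>
        <steps>
          <step><cmd>Remove the steamer nozzle.</cmd></step>
          <step><cmd>Rinse it under hot water.</cmd></step>
        </steps>
      </taskbody>
    </task>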

A chapter dedicated to DITA’s ‘short description’ element, with its multiple uses in topics, links and search results, helps novices with the challenge of using this powerful element correctly.

DITA’s architecture explained

The second part of the book builds on the first. After describing topics as DITA’s most essential building blocks, the book focuses on making topics work together by connecting them and by expanding their usability.

Two chapters show you how to connect topics into a coherent output, such as an online help system or a book. The first chapter on DITA maps explains how to create tables of contents, including bookmaps for print publications. The second chapter on links describes the four different ways in DITA to link topics to each other. In their reassuring style, the authors help you to distinguish them, so you understand when to use which link type and how to apply each correctly.

The next three chapters explain how to make topics work together by expanding their usability: You can use metadata to make your topics ‘smart’ by adding information such as index terms, the addressed audience, or the described product and version. You can use conditional processing to customise output. And you can reuse content for more consistent output and reduced translation costs. A clear workflow helps you to determine which of your content you can reuse and how.
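
For example, reuse in DITA typically works via the conref attribute. A sketch with made-up file and element IDs:

    <!-- In a topic with id="safety" (file safety.dita), the warning is written exactly once -->
    <note id="voltage_warning" type="caution">Connect the machine to a grounded 230 V outlet only.</note>

    <!-- Any other topic pulls the same warning in by reference -->
    <note conref="safety.dita#safety/voltage_warning"/>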

Editing in DITA

The third part of the book deals with editing. One chapter outlines the steps and decisions of a project to convert your existing content to DITA. Useful worksheets help you to analyse your content and prepare it for conversion. The chapter on code review helps you to avoid or eliminate common problems that restrict the benefit of your DITA code. Based on their experience, the authors remind you to use DITA topic types and elements correctly, for example, to use the <steps> element in task topics instead of a more generic ordered list. The chapter on content editing applies best practices of editing to DITA topics and maps.

Useful and recommended

Since it came out, I have used this book more than any other technical writing book, except a style guide. Had it been published earlier, it would have saved me many an uncertain moment when I was designing and teaching our information model. I especially appreciate the clarity, the concision and the well-argued advice of do’s and don’ts. For all its benefits, be aware that the book covers neither the DITA Open Toolkit nor DITA specialisations!

DITA Best Practices lives up to its subtitle and provides essential instruction and advice to technical writers, editors and information architects. Project managers will find it equally helpful but should also consider Julio Vazquez’ Practical DITA which reflects a project structure better. Decision-making managers are probably better off with Ann Rockley’s DITA 101 which gives a shorter high-level overview.

A. Ames & A. Riley on info experience models at STC12

Andrea Ames and Alyson Riley, both from IBM, presented a dense whirlwind tour on “Modelling Information Experiences”, combining four related models into a heavy-duty, corporate information architecture (IA).

While the proceedings don’t include a paper on this session, Andrea posted the slides, and the presenters published a related article (login required) “Helping Us Think: The Role of Abstract, Conceptual Models in Strategic Information Architecture” in the January 2012 issue of the STC’s intercom journal.

The session proceeded in six parts. First, Alyson explained IA models in general and how they work. Then Andrea described each of the four model types that make up an IA.

IA models as science and art

Information architecture relates to science as its models draw on insights and theories of cognition. And its models relate to art as they aim to create a meaningful experience. Both aspects are important. Only if IA models manage to blend science and art can they touch the head and the heart.

The session focused on IA models instead of theories (which are too vague and abstract) or implementations (which are too specific and limited). Thanks to the in-between position of IA models, we can use them to

  • Ask the right questions to arrive at a suitable, feasible IA
  • Tolerate the ambiguities of “real life”

Models present descriptive patterns, not prescriptive rules. They don’t say how stuff must be, but how it can be represented. They differ from real life, but real life is still recognizable in them.

That means you cannot simply implement a model on autopilot and get it right. Instead, you have to

  • Think how to implement the model
  • Vary the model for your users’ benefit
  • Listen to the right knowledgeable people when implementing

Models help you think

To arrive at your concrete IA, you take the model’s abstract patterns and apply your project-specific details to them, the who, what, where and when.

This task is less mechanical and less copy-and-paste than it sounds. It’s not so much a question of following rules and recipes, but of making abstract patterns come to life in a coherent, flexible whole. (If you’ve ever tried to create meaningful concept or task topics by following an information model, you know it’s more than just filling in a DITA template. You need to use your own judgment about what goes where to achieve and maximize user benefit.)

Since there are four related models, you need to think carefully how each of these models should benefit your users. And the models help you think, they scale and adapt to:

  • How your business and its information requirements develop over time, how they grow and expand in desired directions
  • How your customers find, understand and apply the information they need

The four IA model types

The IA model that Andrea then explained consists of four related model types:

use model (content model + access model = information model)

Each of these model types needs to be developed and validated separately.

The use model defines how users interact with information. It describes standard scenarios for optimal user experience within the product or system context. It outlines what information users need and why and how they use it. It includes use scenarios for the entire product life cycle and user personas that outline different types of users. Fortunately for us technical communicators, Andrea pointed out, describing all this is part of our core skill set.

The content model defines how information is structured effectively. This could be DITA (in the case of the company I work for, this is what we call our DITA-derived “information model”). It includes the granular information units and their internal structure, for example, task topics and their standard sequence of contained information. It also describes how these granular units are combined into deliverables.

The access model defines how users access the information efficiently. It makes provided information useable, thanks to a navigation tree, a search function, a filtering function to increase the relevance of search results, an index, etc. Artefacts of this model type are wireframes, storyboards, a navigation tree and the like.

The information model defines how users get from one stage to the next. It uses the other three model types as input and fills in the gaps. It combines the content and access models which probably work fine during the installation and configuration processes, but may not yet carry a user persona from one stage to the next. As such, the information model is sort of the auxiliary topic that you sometimes need to insert between concept, task and reference topics to make a complete book out of them. At the same time, this model type is more detailed than the use model and closer to the other two types.

My takeaways

I was extremely grateful for this session and rank it among the top 3 I’ve seen at the summit – or any tech comm conference I’ve been to! Yes, it was fairly abstract and ran too long – my only complaint is that it left only 2 minutes for discussion at the end.

As abstract as much of the session was, it actually solved a couple of problems I couldn’t quite put into words. After designing and teaching our company’s DITA-derived information model (which is a “content model” in this session’s terms), I thought our model was applicable and useful, but it had two problems:

  • Our model lacked context. – Now I know that’s because we haven’t mapped out our use model in the same detail and failed to connect the two.
  • Our model was baked into a template for less experienced writers to fill in – with varying results. – Now I know that’s because the models are not supposed to provide templates, but require writers to use their own judgment and keep in mind the big picture to deliver effective information.

Another lesson I learned is that many structured information paradigms seem to include a less rigid element that sweeps up much of the miscellaneous remnants. In DITA, there’s the general topic which is used for “under-defined” auxiliary topics to give structure, especially to print deliverables such as manuals. In this IA model, there’s the information model which fills the gaps and ensures a more seamless user experience than the other three models can ensure.

– For an alternative take, see Sarah Maddox’ post about this session.

Top 5 reasons I look forward to the STC12 Summit

I’ll be going to my first STC Summit in a couple of weeks and I’m already really excited about it. Here are my top 5 reasons and motivations:

1. Learn about new trends

The obvious reason to attend a conference: Many of the 80 sessions cover new industry trends – or at least topics that are new to me. We’re currently implementing a new HAT which brings a lot of opportunities and some challenges, so I’m looking forward to:

2. Find inspiration and solutions

The sometimes unexpected benefit: At previous conferences, I frequently got ideas about improving a broken process or solving an irritating problem, even if that was not the main focus of a session. Such insights might come from an aside comment or something I see on a slide that inspires me to connect the dots. That’s why I’m looking forward to:

3. Present my own session

A highlight for me will be presenting Pattern Recognition for Technical Communicators!

My STC Summit speaker button

My session is on Wednesday morning at 8:30. I know that’ll be difficult after Tuesday’s banquet and whatever after-hours may transpire. But it’s actually a very good time!

  • A good time for you, because you can ease into the last day with an entertaining session that gives you a different, thought-provoking perspective on what you do anyway.
  • A good time for me, because I can get a feel for the conference on Monday and Tuesday and then get it out of the way first thing on Wednesday. So I hope to see you there!

The conference program

After teasing you about several interesting sessions, here’s the complete conference program:

  • On a website, sortable by track, time, speaker or session code
  • In PDF, sorted by day and time, with session codes and titles only
  • In Excel 97-2003, sorted by day and time, with titles and main presenter

The first two are the official resources from the summit website, the spreadsheet is from me. All three are current as of May 6, but only the first one will be up to date in case of changes (an updated PDF may have a different link…). To be on the safe side, check the official summit website. – Now back to the reasons…

4. Meet old friends, make new friends

The pleasant side effect also called “networking”: As much as I enjoy social media as a virtual lifeline to stay in touch with the techcomm community, nothing beats meeting in person over a beer once or twice a year. So I’m looking forward to meeting speakers and delegates, tweeps and blog readers!

5. See Chicago

The tourist bit: I know Chicago a little bit from when I went to UW Madison in the 1990s. But I haven’t been in a while, and I’m especially looking forward to visiting the Art Institute and the new Modern Wing – or at least new to me. 🙂

6. Shop around for help authoring tools

Your bonus reason. The company I work for is not in the market right now for a new tool, but maybe you are. With more than 50 product and service providers exhibiting, you’ll have an excellent chance to see a lot of products up close and compare them directly. It’s a little like meeting friends: Nothing beats a first hands-on experience, and it’s a lot less daunting when you don’t have to install a trial version and click your way around. Vendor exhibitions at conferences were essential for us when we were choosing our tool.

7. Deep dish pizza

The gourmet reason. Thanks to Larry Kunz for the reminder, see his comment below. I was quite fond of Pizzeria Uno in my Madison days…

– If I forgot a reason to go to a conference, please share it below. If you’re attending the STC Summit, I hope to meet you in Chicago!

Linking topics: Cross-reference or relationship table?

Choose the appropriate reference type, cross-reference or relationship table, to link between topics so you and your readers get the most from your documentation.

When you’re moving from less-than-structured documentation to topic-based writing, one of the less apparent challenges is to link your related topics to one another. You could just keep on using cross-references, but then you’d miss out on some of the benefits of topics.

Whether you write topics using a standard like DITA or a tool such as MadCap Flare, you have a new reference type: relationship tables. It is important to distinguish the two types, because each serves a unique purpose.

Cross-references

A cross-reference is the link you know from Word or other document-based writing: You create a link to a heading or a bookmark, it can show the heading title, and it updates automatically if that heading (or a page number) changes.

It leads readers from a certain point or condition to another place. It tells readers:

If you want to do or know that now, go over there.

So far, so good. This kind of link works well if you have a document with an organised sequence. Occasionally, you need to offer the reader a branching into two alternative scenarios or a jump to another place.

But when you convert your content into a pile of loosely connected topics, you have much more demand and more opportunities to relate topics.

Disadvantages of cross-references

Cross-references aren’t always the best way to relate topics because they:

  • Disrupt reading flow and orientation. For users, it’s fine to make the occasional choice between scenario A or B. But offering too many links will tempt many users to wonder what they’re missing at the other end. And following link after link from within one topic to the next quickly breaks the flow of tasks and leaves the user confused.
  • Create a web of dependencies. With cross-references, you tie your topics to one another in a certain preferred scenario you engineer. This scenario may not suit the user’s current need. It undermines the flexible “stand-alone” independence of topics that supports multiple use cases. And it makes it harder for writers to reuse them.

So while cross-references are easily a preferred way of linking contents in document-based writing, consider carefully how they will affect your use of topics – and your users’ benefit from them.

Cross-references in topics work well in these cases:

  • Link to mandatory pre-requisites or required next steps.
  • Link to a series of tasks in an overview/parent topic.

In either case, the user pretty much must follow the link to achieve anything useful, so cross-references are fine. Just make sure that the link and the surrounding text are meaningful, so users can decide whether they should follow the link.
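
In DITA, for example, the first case could be a plain cross-reference inside a task’s prerequisite; the file names are made up:

    <task id="calculate_income_taxes">
      <title>Calculate income taxes</title>
      <taskbody>
        <!-- The user must complete the other task first, so an inline xref is appropriate -->
        <prereq>Register with your tax office first; see
          <xref href="register_with_tax_office.dita"/>.</prereq>
        <steps>
          <step><cmd>Add up your taxable income.</cmd></step>
        </steps>
      </taskbody>
    </task>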

– So if cross-references are not always recommended, how else can I link between topics?

Relationship tables

A relationship table is best to indicate that certain topics as a whole are related. It tells readers:

If you’re involved with this topic, you should also be aware of those topics.

For users, links from relationship tables appear separately, usually below the actual topic text in a section of related links or “see also”, depending on how you choose to style them.

For writers, under the hood, a relationship table is a separate file that lists by type which topics are related to one another. For example, I could have a table like this:

Concept topics   Task topics              Reference topics
Income taxes     Calculate income taxes   Income tax deadlines
                 File income taxes        Addresses of tax offices
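
In DITA, for example, such a relationship table lives in the map, not in any topic. A minimal sketch with made-up file names:

    <map>
      <reltable>
        <relheader>
          <relcolspec type="concept"/>
          <relcolspec type="task"/>
          <relcolspec type="reference"/>
        </relheader>
        <!-- All topics in a row are related; each cell corresponds to one column type -->
        <relrow>
          <relcell>
            <topicref href="income_taxes.dita"/>
          </relcell>
          <relcell>
            <topicref href="calculate_income_taxes.dita"/>
            <topicref href="file_income_taxes.dita"/>
          </relcell>
          <relcell>
            <topicref href="income_tax_deadlines.dita"/>
            <topicref href="tax_office_addresses.dita"/>
          </relcell>
        </relrow>
      </reltable>
    </map>

When you generate output, each topic in the row gets related links to the topics in the other cells.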

This means that the topics are related as a whole. And they will remain related, even if you update one of them by adding, changing or deleting a paragraph.

This is a pretty new concept if you’re used to writing long single documents. And it might feel awkward to have references outside and removed from the linked topics.

Advantages of relationship tables

Once you wrap your head around the idea, relationship tables have several advantages:

  • Keep your topic text flexible. With such a table, you don’t lock your topic into a certain scenario as a cross-reference does. A cross-reference establishes a fixed connection – which might be irrelevant or not even available for certain users or product versions. It’s much easier to list a topic in a relationship table, where it simply won’t appear if it doesn’t exist for certain users or products.
  • Keep your references complete and up-to-date. With tables, it’s much easier to oversee the complete set of links and relationships than with cross-references inside topics. If you’ve ever tried to manually update and rephrase links to a new important topic which has replaced an obsolete topic in countless places, you will appreciate a table where you can simply add or omit any one topic.

Relationship tables are not superior to cross-references. They simply serve a different purpose. I hope this post helps you to appreciate the benefits of either type and to decide when to use which. Please leave a comment to let me know if I’ve succeeded or have been wrong or unclear somehow.

For more about converting documentation to topics, see my previous posts.

Concept or reference, what’s the difference?

The distinction between concepts and reference topics is much easier and clearer when both support strong and clear task topics.

Concept or reference?

One of the recurring difficulties when moving to structured writing and topic-based authoring is distinguishing the concept and reference topic types. It’s an odd problem because the three topic types – concept, task and reference – seem rather logical and clear-cut in theory.

I’ve found that the best remedy for the confusion is the motivation that lies beneath topic-based authoring: Task orientation. Think of it this way:

  • Task orientation is a design strategy for your documentation
  • Topic-based authoring is “only” the method to implement task orientation.

So concept topics and reference topics exist to support tasks. “The goal for users … is not to understand a concept but to complete a task.” (p. 41, DITA Best Practices: A Roadmap for Writing, Editing, and Architecting in DITA).

Let tasks lead the way

Much of the uncertainty whether a topic is a concept or a reference disappears when you have strong, solid task topics in place – topics that directly address your users and their daily tasks and help them to get their job done:

  • How do I create cool espresso drinks with my new coffee machine?
  • How do I clean the milk steamer?

Such tasks are the context in which the two definitions of concept and reference from the DITA 1.2 specification make sense:

  • Concepts “provide conceptual information to support the performance of tasks”.
  • Reference topics “provide for the separation of fact-based information from concepts and tasks”

Note that both definitions refer back to tasks, so task orientation and tasks as above are at the heart of topic-based authoring.

To help readers understand the background to tasks, you can offer concepts about the kinds of espresso drinks there are and how they differ.

To support users in actually performing the tasks, you can offer reference topics with technical specifications such as required voltage and recommended water softness.
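
As a DITA sketch with made-up content, the two supporting topic types could look like this:

    <!-- A concept topic supports understanding of the task -->
    <concept id="espresso_drinks">
      <title>Types of espresso drinks</title>
      <conbody>
        <p>A cappuccino combines a shot of espresso with steamed and frothed milk.</p>
      </conbody>
    </concept>

    <!-- A reference topic supports correct execution of the task -->
    <reference id="machine_specifications">
      <title>Technical specifications</title>
      <refbody>
        <properties>
          <property>
            <proptype>Voltage</proptype>
            <propvalue>230 V</propvalue>
          </property>
          <property>
            <proptype>Water softness</proptype>
            <propvalue>Soft (up to 8.4 °dH)</propvalue>
          </property>
        </properties>
      </refbody>
    </reference>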

How to distinguish them?

If you’re in doubt about particular examples, maybe the table below can help you. I got some of the criteria from a Yahoo group discussion, “Concept v Reference – Battle to the Death”, and from a blog post on Dubious Prospects.

Concept topics                            Reference topics
Are abstract ideas                        Are specific settings
Explain meaning or benefit                Give facts without much explanation
Can stay when specifications change       Change with specifications
Support understanding of tasks            Support correct execution of tasks
Are read for background information       Are read for detailed information
“Why brushing your teeth is important”    “Stages of tooth decay”