STC13: Alyson Riley on effective IA

In her session “Building Effective IA Teams in Resource-Challenged Times”, Alyson Riley from IBM offered her take on the recent theme that tech comm needs to “speak business” to prove its worth. (This is part of my coverage of the STC Summit 2013 in Atlanta.)

Alyson argued that “nice to have” initiatives are no longer compelling enough to get tech comm a budget or a mandate. To play a mission-critical role in a corporation, tech comm must plug into the corporate strategy. However, that strategy and its stakeholders usually aren’t waiting for us to put in our two cents. So we tech comm’ers must:

  1. Focus on corporate strategy as opposed to tactics.
  2. Play to the motivations behind the strategy, so we can come up with ways to support it with our unique skills and contributions.

The following moves can help with that second step:

  • Address the “buyer evaluates” and “buy” stages of the product life cycle. Usually, we speak to the “customer uses” stage, where there’s often more cost than income. The challenge is to make it compelling for buyers and sales to use our content to their benefit in the more profitable stages, too. A good start is to ask sales: “What is the hardest part of your job?” and see if we can help them with the information we provide.
  • Influence social content to help leads along the marketing funnel from awareness to loyalty and advocacy. That doesn’t mean to “sell out” completely to marketing. It’s often as easy and sensible as including customer benefits in our content. Simply add the “why” to the “how” and give clients a chance to understand and promote your product.

Both moves boil down to the same principle: Don’t educate stakeholders in sales, marketing, product management, etc. about the product. Instead, imagine what the success of these respective stakeholders looks like and address that:

  1. Analyze opportunities your product can address, in the terms that sales and marketing use.
  2. Craft an effective story that centers on your content and how it can drive revenue, sales, customer satisfaction and loyalty.
  3. Prove it with metrics that speak to the stakeholders.

When it comes to metrics, page views of documentation usually don’t impress managers much. Instead, Alyson suggested “time-to-value” (TTV), which measures the customer’s time from buying or paying for the product to the moment they reap value from it. This is similar to “return on investment”, but TTV can be easier to measure when the investment consists of one-time payments plus maintenance fees. Also, it’s easier for tech comm to favorably influence TTV… 🙂


First day at tekom12

This is my second year at tekom, the world’s largest tech comm conference, held annually in Wiesbaden, Germany. tekom is nominally a German conference that coincides with its English-language sibling conference, tcworld. As the hashtag confusion on twitter shows once again, the English tech comm scene tends to use both names. (Which makes me wonder why the organizers don’t simply use the tekom name for the whole thing, which has sessions in both English and German…?)

My session on meaning in tech comm

I skipped the morning sessions, since I was feeling a little under the weather. I didn’t even get to tekom until around 1 pm, but still in plenty of time for my own presentation on How our addiction to meaning benefits tech comm. I had submitted two very different talks, and I thank the organizers for picking the “wacky” one. And to my surprise, about 100 people turned out for meaning, semiotics and mental models! I thought the talk went well. I got some nice comments at the end and some very positive feedback on twitter afterwards.

You can find my slides on Slideshare and on the conference site. Sarah Maddox has an extensive play-by-play write-up of how my session went on her blog.

Content Strategy sessions

Scott Abel has put together a very good stream of content strategy sessions, where I attended presentations by Val Swisher and Sarah O’Keefe (I also blogged about Sarah’s presentation). I’m not sure if my observation is accurate, but it seemed to me that there was less interest and excitement about this stream than at its premiere last year. As befits content strategy, both sessions I attended were strategic rather than operational, so they dealt primarily with how tech comm fits into the larger corporate strategy.

Marijana Prusina on localizing in DITA

Then I went to hear Marijana Prusina give a tutorial on localizing in DITA. I have no first-hand experience with DITA, but I use a DITA-based information model at work, so this gave me a reality check of what I was missing by not using the real thing. Seeing all the XSLT you get to haggle with in the DITA Open Toolkit, I cannot exactly say that I regret not using DITA.

Beer & pretzels

Huge thanks to Atlassian and k15t who sponsored a reception with free beer & pretzels – and even t-shirts if you left them your business card. This coincided with the tweet-up. It was good to see tech comm colleagues from around the world (Canada, the US, Australia, France and Germany, of course). Some I had known via twitter or their blogs for a while, so it was a welcome chance to finally meet them in person.

– For many more session write-ups, check out Sarah Maddox’ blog!

– So much for the first day, two more to come. I’m looking forward to them!

Turning tech comm into a biz asset by Sarah O’Keefe

Turning technical communications into a business asset, according to Sarah O’Keefe, is mainly about justifying cost; that justification is necessary, but it is also possible. Her session at tekom12 was part of the Content Strategy stream, which Scott Abel presented, as he did last year.

How expensive is your documentation – really?

Much progress in a tech comm department stalls when we, the tech writers, say: “Ah, that’d be great – but they’ll never pay for it!” What that really means is: “‘They’ don’t see the value (or the urgency).” So to prove the value of tech comm, we need to show how we can either save money (by reducing effort) or generate additional revenue (by producing value that exceeds our cost).

Sarah pointed out several ways to do this:

  • Show how tech comm can address legal or regulatory issues. Avoiding lawsuits is a great way to save your employer’s money!
  • Control the real cost of tech comm, because “cheap can be very expensive”: Yes, you may get something akin to documentation from a secretary or an intern, but…
    • Is your documentation efficient to maintain?
    • Does it scale or allow publication in other formats?
    • Does it actually satisfy your customers and support your brand – or does it stab your corporate value statement in the back?

Cost containment strategies

Sarah mentioned several strategies to control documentation cost.

The first bunch has to do with efficient content development:

  • Reuse as much content as possible: Write once, use many times, either in different places of the same format or in different output formats.
  • Automate formatting: Manually handcrafted formatting of deliverables can be a huge cost factor. It’s not uncommon for tech writers to spend 20% of their time (and hence a sizable chunk of money) on formatting output. Automate this by relegating formatting either to templates or CSS (see the sketch after this list).
  • Localization scales your content inefficiencies: The more inefficiencies there are in your original documentation process, the more expensive localizing or translating your content becomes. This applies to content reuse, content variants and formatting.
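To make the CSS point concrete, here’s a minimal sketch of what such formatting automation can look like, assuming an HTML-based output; the class names are made up for illustration, not taken from any particular stylesheet:

    /* Notes and procedure steps get their look from one place,
       instead of being formatted by hand in every topic. */
    .note {
      border-left: 4px solid #e8a33d;
      background: #fdf6e3;
      padding: 0.5em 1em;
    }
    ol.steps > li {
      margin-bottom: 0.75em;
    }
    ol.steps > li::marker {
      font-weight: bold;
    }

Change the stylesheet once, and every deliverable picks up the new look on the next build – that’s where the 20% goes away.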

Then there’s cost reduction outside of the tech comm team, for example, in tech support:

  • Consider whether your documentation is good enough to deflect the maximum possible number of support calls. Anything that users cannot find in the documentation, whether it’s missing or unfindable, drives up costs for your tech support staff.
  • Ensure your tech support staff has access to your documentation in formats they can work with efficiently. Downloading and then opening a document of 10 or 20 MB is not only clumsy in its own right, it’s also unlikely to present the required information in the most efficient way…
  • Ensure your documentation content is actually useful to tech support staff: It must not only be accurate, but also up-to-date. Consider the nightmare in terms of costs and maintenance if tech support spun off their own documentation to augment the “official documentation”. Instead, invite them to contribute to the documentation you create.

Make documentation more strategic

Then there are a few strategies to make documentation more strategic, or rather, more strategically valuable:

  • Ensure your documentation is not only searchable (so it’s captured by publicly accessible search engines), but also findable (so people know where and how to get to it) and discoverable (so people link to it, from blogs or forums or twitter or the like).
  • Align tech comm to larger business goals: Find a corporate goal, preferably one that is tied to revenue to be made or cost to be avoided, and show: If the tech comm team did this, it could contribute approximately that much money (in savings or additional revenue) to that larger corporate goal.

Conclusion

Sarah’s talk was geared towards the strategic angle of tech comm and succeeded in making valuable points very clearly. Whether you can actually apply her advice in your situation may depend on how much the managers who control the budget feel the pain that better tech comm would relieve.

Scott Abel on Structured Content at TCUK12

Scott Abel delivered his keynote It’s All About Structure! Why Structured Content Is Increasingly Becoming A Necessity, Not An Option in his usual style: provocative, but relevant, fun and fast-paced (though he said he was going to take it slow). He even channeled George Carlin’s routine on Stuff: “These are ‘MY Documents’, those are YOUR documents. Though I can see you were trying to get to MY Documents…”

His style doesn’t translate well onto a web page, so I’ll restrict myself to his 9 reasons Why Structured Content Is Increasingly Becoming A Necessity:

  1. Structure formalizes content, so it can guide authors who need to make fewer decisions when writing it. It also guides readers who can find more easily where the relevant information is in the whole documentation structure or within a topic. And it guides computers which can extract relevant information automatically and reliably.
  2. Structure enhances usability by creating patterns that are easy to recognize and easy to navigate with confidence.
  3. Structure enables automatic delivery and syndication of content, for example, via twitter – and you’ll be surprised occasionally when and how other people syndicate your “stuff”.
  4. Structure supports single-sourcing which means you can efficiently publish content on several channels, whether it’s print or different online outputs, such as a web browser, an iPad or a smartphone.
  5. Structure can automate transactions, such as money transfers, whether they are embedded in other content or content items in their own right.
  6. Structure makes it easier to adapt content for localization and translation, because you can chunk content to re-use existing translations or to select parts that need not only be translated but localized to suit a local market.
  7. Structure allows you to select and present content dynamically. You can decide which content to offer on the fly and automatically, depending on user context, such as time and location.
  8. Structure allows you to move beyond persona-ized content. This is not a typo: Scott doesn’t really like personas. He thinks they are a poor approximation of someone who is not you, and one that is no longer necessary. With structured content (and enough information about your users) you can personalize your content to suit them better than personas ever could.
  9. Structure makes it much easier to filter and reuse content to suit particular variants, situations and users (see the sketch after this list).
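Points 4, 6 and 9 are easiest to see in markup. Here’s a minimal, hypothetical DITA sketch (file names, IDs and content are invented for illustration): the note is written once elsewhere and pulled in by reference, and the audience attribute lets a build filter the admin-only step out of the end-user output:

    <task id="backup_db">
      <title>Back up the database</title>
      <taskbody>
        <context>
          <!-- Reuse: this note lives once in warnings.dita and is
               pulled in by reference wherever it is needed. -->
          <note conref="warnings.dita#warnings/data_loss"/>
        </context>
        <steps>
          <!-- Filtering: an end-user build can exclude steps
               marked with audience="admin". -->
          <step audience="admin">
            <cmd>Stop the database service.</cmd>
          </step>
          <step>
            <cmd>Run the backup wizard.</cmd>
          </step>
        </steps>
      </taskbody>
    </task>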

Top 4 steps from manuals to topics

A little bit of planning ensures you get clean, manageable topics from your conversion of user manuals.

While most help authoring tools support importing Word documents, there’s more to getting re-usable topics out of user manuals, as I’ve found out. I’ve spent the last few weeks converting 3 related Word manuals of 360 pages into 400 topics in Madcap Flare – though I believe that the process below applies to other tools as well.

The aim was to merge the contents of separate Word-to-PDF manuals with the online help topics into a single-sourcing repository from which we can create both online help and manuals.

My two key lessons of the conversion are:

  • Plan first, execute second – several hundred topics are too many for trial & error and picking up the pieces later.
  • Do each task as early as possible – some Word idiosyncrasies are hard to clean up after the conversion.

And here’s how I did it in 4 steps:


1. Start with plans

The whole conversion exercise benefited greatly from two designs that I followed:

  • An information model
  • A folder structure for my topics

The information model defines the 4 topic types we have and what each type contains internally. It’s basically “DITA, without the boring parts” about which I blogged previously.

The folder structure divides my one Flare project into several sub-folders, so I don’t have 400 topics in one heap. Instead, I now have 13 sub-folders which divide up my topics by topic type (concept, task or reference) and even by task type (initial setup or daily workflow). That makes it easier to manage the topic files.
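As a sketch, such a folder layout can look like this (the names are made up to show the principle, not my actual 13 sub-folders):

    Content/
        Concepts/
        Tasks/
            Setup/        (initial setup tasks)
            Workflows/    (daily workflow tasks)
        References/
        Auxiliary/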

2. Prepare for the import

Once I had the basic structure to organize topics and their insides, I prepared my Word manuals, so I didn’t have to deal with a GIGO situation, where I get Garbage In, Garbage Out.

First, I edited the documents into topics, so each section could become either a concept, task or reference topic – or an auxiliary topic which ensures that the chunks still flow nicely when you read them in the future manual output. I also ensured that section headings indicate topic contents and type:

  • Concept topics use noun phrases as headings
  • Task topics start with an imperative

Then, I cleaned up the documents. Technically, you can convert unstructured Word content full of modified styles and manual formatting into topics, but it will give you unmanageable content and endless grief. So do your future self a favor and dissolve all modified styles and manual formatting before you import.

3. Import

Thus prepared, I’ve found that Flare’s built-in Word import is very good, consistent and reliable if you throw well-structured Word documents at it. Only tables didn’t import well (or I couldn’t figure out how to do it), so I re-styled them in Flare.

If you’re a stickler for clean topics, you can go ahead in Flare and clean out unnecessary remnants (a regex sketch follows after this list):

  • Remove Word’s reference tags in cross-references by replacing *.htm#_Ref1234567" with *.htm"
  • Remove Word’s Toc tags in Flare’s table of contents by replacing *.htm#_Toc1234567" with *.htm"
  • Remove Word’s Toc anchors in topics by deleting <a name="_Toc*"></a>
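In regular-expression form, those replacements look roughly like this – a sketch that assumes the links sit in href attributes the way Flare’s Word import writes them:

    Find:    \.htm#_(Ref|Toc)\d+"
    Replace: .htm"

    Find:    <a name="_Toc\d+"></a>
    Replace: (nothing – delete the match)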

4. Add value to topics

At this point, I had a pile of 400 clean topics, but no added value from the conversion yet. That came from additional tasks:

  • Dividing up topic files into the folder structure, which makes hundreds of topic files manageable.
  • Assigning a topic type to topic files (Flare lets you do that for several files at once, so this was very fast), which makes the content intelligent, because topics “know” what they are.
  • Assigning in-topic elements (as div tags) to topic paragraphs according to the information model, which allows you to identify and reuse even parts of topics, for example, instruction sections or example sections (see the sketch after this list).
  • Moving cross-references into relationship tables where feasible, which ensures that links are easier to manage and to keep up to date.
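Here’s a minimal sketch of what such in-topic elements can look like in a Flare (XHTML) topic – the class names are invented for illustration, though they mirror the kinds of sections an information model defines:

    <body>
      <h1>Create a user account</h1>
      <!-- Each div marks an in-topic element from the information
           model, so even parts of a topic can be found and reused. -->
      <div class="instruction">
        <ol>
          <li>Open the user list and click New.</li>
          <li>Enter the name and e-mail address, then save.</li>
        </ol>
      </div>
      <div class="example">
        <p>For a trial user, set an expiry date of 30 days.</p>
      </div>
    </body>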

Your turn

Have you done a similar conversion? What were your experiences? Did you do it yourself or with an outside consultant? Feel free to leave a comment.

A. Ames & A. Riley on info experience models at STC12

Andrea Ames and Alyson Riley, both from IBM, presented a dense whirlwind tour on “Modelling Information Experiences” that combine four related models into a heavy-duty, corporate information architecture (IA).

While the proceedings don’t include a paper on this session, Andrea posted the slides, and the presenters published a related article (login required), “Helping Us Think: The Role of Abstract, Conceptual Models in Strategic Information Architecture”, in the January 2012 issue of the STC’s Intercom magazine.

The session proceeded in six parts. First, Alyson explained IA models in general and how they work. Then Andrea described each of the four model types that make up an IA.

IA models as science and art

Information architecture relates to science as its models draw on insights and theories of cognition. And its models relate to art as they aim to create a meaningful experience. Both aspects are important. Only if IA models manage to blend science and art can they touch the head and the heart.

The session focused on IA models instead of theories (which are too vague and abstract) or implementations (which are too specific and limited). Thanks to the in-between position of IA models, we can use them to

  • Ask the right questions to arrive at a suitable, feasible IA
  • Tolerate the ambiguities of “real life”

Models present descriptive patterns, not prescriptive rules. They don’t say how stuff must be, but how it can be represented. They differ from real life, but real life is still recognizable in them.

That means you cannot simply implement a model on autopilot and get it right. Instead, you have to

  • Think how to implement the model
  • Vary the model for your users’ benefit
  • Listen to the right knowledgeable people when implementing

Models help you think

To arrive at your concrete IA, you take the model’s abstract patterns and apply your project-specific details to them: the who, what, where and when.

This task is less mechanical and less copy-and-paste than it sounds. It’s not so much a question of following rules and recipes, but of making abstract patterns come to life in a coherent, flexible whole. (If you’ve ever tried to create meaningful concept or task topics by following an information model, you know it’s more than just filling in a DITA template. You need to use your own judgment about what goes where to achieve and maximize user benefit.)

Since there are four related models, you need to think carefully how each of these models should benefit your users. And the models help you think; they scale and adapt to:

  • How your business and its information requirements develop over time, how they grow and expand in desired directions
  • How your customers find, understand and apply the information they need

The four IA model types

The IA model that Andrea then explained consists of four related model types:

  • use model
  • content model
  • access model
  • information model (roughly: content model + access model, with the gaps filled in)

Each of these model types needs to be developed and validated separately.

The use model defines how users interact with information. It describes standard scenarios for optimal user experience within the product or system context. It outlines what information users need and why and how they use it. It includes use scenarios for the entire product life cycle and user personas that outline different types of users. Fortunately for us technical communicators, Andrea pointed out, describing all this is part of our core skill set.

The content model defines how information is structured effectively. This could be DITA (in the case of the company I work for, this is what we call our DITA-derived “information model”). It includes the granular information units and their internal structure, for example, task topics and their standard sequence of contained information. It also describes how these granular units are combined into deliverables.
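For a concrete picture, here’s a skeleton of how a content model can prescribe the internal sequence of a task topic – sketched in DITA, since that’s the example at hand; the comments are mine:

    <task id="install_client">
      <title>Install the client</title>
      <taskbody>
        <prereq>...</prereq>      <!-- what must be in place first -->
        <context>...</context>    <!-- why and when to do this -->
        <steps>
          <step><cmd>...</cmd></step>
        </steps>
        <result>...</result>      <!-- how to verify success -->
      </taskbody>
    </task>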

The access model defines how users access the information efficiently. It makes provided information useable, thanks to a navigation tree, a search function, a filtering function to increase the relevance of search results, an index, etc. Artefacts of this model type are wireframes, storyboards, a navigation tree and the like.

The information model defines how users get from one stage to the next. It uses the other three model types as input and fills in the gaps. It combines the content and access models which probably work fine during the installation and configuration processes, but may not yet carry a user persona from one stage to the next. As such, the information model is sort of the auxiliary topic that you sometimes need to insert between concept, task and reference topics to make a complete book out of them. At the same time, this model type is more detailed than the use model and closer to the other two types.

My takeaways

I was extremely grateful for this session and rank it among the top 3 I’ve seen at the summit – or any tech comm conference I’ve been to! Yes, it was fairly abstract and ran too long – my only complaint is that it left only 2 minutes for discussion at the end.

As abstract as much of the session was, it actually solved a couple of problems I couldn’t quite put into words. After designing and teaching our company’s DITA-derived information model (which is a “content model” in this session’s terms), I thought our model was applicable and useful, but it had two problems:

  • Our model lacked context. – Now I know that’s because we haven’t mapped out our use model in the same detail and failed to connect the two.
  • Our model was baked into a template for less experienced writers to fill in – with varying results. – Now I know that’s because the models are not supposed to provide templates, but require writers to use their own judgment and keep in mind the big picture to deliver effective information.

Another lesson I learned is that many structured information paradigms seem to include a less rigid element that sweeps up much of the miscellaneous remnants. In DITA, there’s the general topic which is used for “under-defined” auxiliary topics to give structure, especially to print deliverables such as manuals. In this IA model, there’s the information model which fills the gaps and ensures a more seamless user experience than the other three models can ensure.

– For an alternative take, see Sarah Maddox’ post about this session.

Top 5 reasons I look forward to the STC12 Summit

I’ll be going to my first STC Summit in a couple of weeks and I’m already really excited about it. Here are my top 5 reasons and motivations:

1. Learn about new trends

The obvious reason to attend a conference: Many of the 80 sessions cover new industry trends – or at least topics that are new to me. We’re currently implementing a new HAT (help authoring tool), which brings a lot of opportunities and some challenges, so I’m looking forward to:

2. Find inspiration and solutions

The sometimes unexpected benefit: At previous conferences, I frequently got ideas about improving a broken process or solving an irritating problem, even if that was not the main focus of a session. Such insights might come from a side comment or something I see on a slide that inspires me to connect the dots. That’s why I’m looking forward to:

3. Present my own session

A highlight for me will be my own session, Pattern Recognition for Technical Communicators!

My STC Summit speaker button

My session is on Wednesday morning at 8:30. I know that’ll be difficult after Tuesday’s banquet and whatever after-hours activities may transpire. But it’s actually a very good time!

  • A good time for you, because you can ease into the last day with an entertaining session that gives you a different, thought-provoking perspective on what you do anyway.
  • A good time for me, because I can get a feel for the conference on Monday and Tuesday and then get my session out of the way first thing on Wednesday. So I hope to see you there!

The conference program

After teasing you about several interesting sessions, here’s the complete conference program:

  • On a website, sortable by track, time, speaker or session code
  • In PDF, sorted by day and time, with session codes and titles only
  • In Excel 97-2003, sorted by day and time, with titles and main presenter

The first two are the official resources from the summit website, the spreadsheet is from me. All three are current as of May 6, but only the first one will be up to date in case of changes (an updated PDF may have a different link…). To be on the safe side, check the official summit website. – Now back to the reasons…

4. Meet old friends, make new friends

The pleasant side effect also called “networking”: As much as I enjoy social media as a virtual lifeline to stay in touch with the techcomm community, nothing beats meeting in person over a beer once or twice a year. So I’m looking forward to meeting speakers and delegates, tweeps and blog readers!

5. See Chicago

The tourist bit: I know Chicago a little bit from when I went to UW Madison in the 1990s. But I haven’t been back in a while, and I’m especially looking forward to visiting the Art Institute and the new Modern Wing – or at least it’s new to me. 🙂

6. Shop around for help authoring tools

Your bonus reason. The company I work for is not in the market for a new tool right now, but maybe you are. With more than 50 product and service providers exhibiting, you’ll have an excellent chance to see a lot of products up close and compare them side by side. It’s a little like meeting friends: Nothing beats a first hands-on experience, and it’s a lot less daunting when you don’t have to install a trial version and click your way around on your own. Vendor exhibitions at conferences were essential for us when we were choosing our tool.

7. Deep dish pizza

The gourmet reason. Thanks to Larry Kunz for the reminder, see his comment below. I was quite fond of Pizzeria Uno in my Madison days…

– If I forgot a reason to go to a conference, please share it below. If you’re attending the STC Summit, I hope to meet you in Chicago!