STC13: David Pogue’s keynote speech

David Pogue, technology columnist for the New York Times, kicked off the STC Summit 2013 with his keynote. He looked back to his previous keynote at the 2009 summit and forward to what future developments in technology and materials science might bring. (This is part of my coverage of the STC Summit 2013 in Atlanta.)

Alan Houser, President of the STC, introduced him as the “most publicly visible technical communicator on the planet”. David started with a recap of his earlier address. He explained (again) how up to 2009, the acceleration in technological developments had led to a challenge and a paradox. The challenge concerns hardware where machines have become smaller and smaller, while our means to operate them, namely our fingers, have remained essentially the same size. The paradox occurs in software where companies often justify the most recent upgrade by piling on yet more new features – without necessarily having a good place to put them.

David Pogue delivering the keynote speech at the STC Summit 2013. Photo by Nitza Hauser.

Out of this challenge and this paradox comes the unexpected situation where a company such as Apple can achieve a competitive advantage by successfully eliminating features in a device such as the iPod. This cult of simplicity sells, and the product or service with fewer buttons wins, whether it’s an iPad or the Google search start screen.

The reason why this works is psychological, says David: Achievements give us joy and make us feel that, yes, I can do this, and for this, I’m a good person. Conversely, not understanding how stuff works or why stuff is so weird terrifies us.

Which brought David to Windows 8 and his task to write a book about the operating system for his “Missing Manual” series. David made it clear that he appreciates much about Windows 8 – however, there are certain features that drive him crazy because they task him with documenting something that makes no sense – a feeling many tech comm’ers know well.

Specifically, Windows 8 presents two versions of many applications, two browsers, two e-mail clients, etc. – one in the GUI with tiles and one in the regular desktop. In the tiles GUI, there are no folders or files – and the control panel with system settings is only available via search.

Okay, so David decided to document the two GUIs in two separate parts of his book. Which raised the question of what to call each GUI. The desktop is the desktop. But what is the GUI with tiles called? It started out as “Metro” until a German retail chain of the same name threatened to sue Microsoft. The “Modern UI” moniker was internal Microsoft lingo only. So David asked Microsoft directly. The GUI with tiles, it turns out, is called – “Windows 8”! As is the operating system in which the GUI with tiles and the desktop both live…

This didn’t make sense to David, so he invented the name “TileWorld” – and the name stuck! (… it does sound like a DIY store for bathrooms to me…)

David thought the main issue was the decision to combine the two GUIs. The common desktop is more cumbersome, but it runs all applications and is known to most Windows users today. “TileWorld” has its advantages in a mobile tablet world, but is unsuitable for many uses such as drawing, spreadsheets, word processing, etc. – all these don’t work well with gestures on a large touchscreen on the desk in front of you.

The takeaway lesson David shared was: “Terminology should be for clarity and to serve the reader.”

David ended his keynote to rousing applause as he regaled us with his very own version of the show tune “I Feel Pretty”, “I’m On Twitter”.


MadCap roadshow in Long Beach

MadCap kicked off the 2011 season of roadshows on March 13 in Long Beach, CA, to coincide with the workshop day of the WritersUA conference.

A full-day program offered primers on topic-based authoring and single sourcing, best (and less recommended) practices of collaborative authoring, and a passionate introduction to Cascading Style Sheets (CSS). Shorter breakout sessions (which don’t seem to be part of the other roadshow dates) presented tips and tricks for Flare: How to handle tables, create print output and localize, as well as a case study about moving from RoboHelp to Flare.

Mike Hamilton at the MadCap roadshow

Mike Hamilton reminds us to think topics!

More like a conference

Several aspects made the event feel more like a Flare-centric conference than the marketing or sales event that it also was:

  • Presentations addressed tech writing challenges in general, whether or not you use Flare:
    • How to optimize topics for reuse
    • How to efficiently publish to several media from one source
    • How to collaborate with other authors.
    • And, oh yes, how you can do these things with Flare. But the general emphasis was on: “Here’s how you can work efficiently.” In fact, the helpful introduction to CSS didn’t rely on Flare at all. The more sales-y “Wanna see what the new version can do?” came only in a longish rock video with MC Mike during the drinks reception.
  • Networking opportunities galore during breakfast, lunch, the concluding reception and to some extent even during the breakout sessions.
  • A day’s worth of Q&A. MadCap brought more than a dozen people from all teams to be able to answer any question that present and future users might throw at them. Their answers were usually constructive and frank, though my question about the number of employees got the PR-tinted answer “fewer than 100”. When asked whether Flare can do X, Mike sometimes says: “Not at the moment.” Previously, I thought this was just supposed to sound better than “No.” But after following Flare’s development for a few versions, I now see that many such features have been added, such as the long-awaited formula capabilities in version 7.

Taking tech comm seriously

In a nutshell, MadCap continues to take tech writers, their requests and issues seriously. Here are some examples of insights from the sessions to illustrate how they go beyond the bells and whistles of selling a new version:

  • Customer focus is important – but documentation must also recognize and satisfy the owner (usually, the company that pays for the documentation) and the manager (who needs to ensure that documentation is completed on time and on budget).
  • Create separate target definitions for separate deliverables. Don’t rely on manual steps in the production process which might get lost in the hustle and bustle. For example, don’t trust that you’ll remember to update the global variable with the correct company name…
  • When you share source files over a network, even in a small team, use either a dedicated source control system or SharePoint (which includes source control). This avoids two people editing the same file at once and allows you to lock down central resources like stylesheets.
  • There is no one font size that’s always appropriate. How large a font appears depends a lot on the “x-height” (roughly the height of the letter x), and that can vary in fonts of the same size.

Wish list

Taking my cue from other conferences I’ve attended, I think MadCap could add two things to improve the roadshow:

  • Book table. There are a few books available about MadCap products. For those of us who like books with our tools, why not have at least a sample copy by the registration table, so we can check them out and decide whether we want to buy them?
  • Rant & rave session. This one might not be in MadCap’s best interest, but hey, they’re generally a pretty accessible company who like to collaborate with their customers and prospects. I think they should put on a session where attendees can get one minute each to rant and/or rave about the products. Such a forum would give MadCap a quick way to see what bothers a lot of users – and what they like. Two peeves came up:
    • The previously free reviewer module “X-Edit” has been replaced by “Contributor” which requires a license per user. This is not feasible in environments where each writer easily has one or two dozen potential reviewers. MadCap needs to come up with a better licensing model for this.
    • Flare lets you define separate CSS Mediums for print and online in great detail. A field near the top indicates which one you’re editing. Yet a lot of users still manage to edit the “wrong” one when they get all engrossed in styles. Simply highlighting this better would improve usability for focused, single-minded users.

If you’ve attended a MadCap roadshow or other such industry event or are considering attending, feel free to share impressions or questions in the comments.

Top 5 reasons to attend a tech comm conference

The benefits of attending a tech writing conference go way beyond learning about methods and tools. That’s why I really look forward to Writers UA next week!

Most reasons are kinda obvious really. But put them all together, and they create a serious pre-conference buzz, almost like when you follow a band on tour or attend an intense music or theatre workshop. If you’ve never been to a conference, I can assure you all the benefits will make it worth your while. And if you know what it’s like, I invite you to add your own reasons in the comments.

(tekom 2010, photo by jophan).

1. Learn about methods and trends

This is the “official”, token reason: Of course, you’ll learn about tech writing methods, tech comm trends and case studies. Look at the conference web site which gives you an idea what to expect. (Here’s my earlier list of links to conferences this year.)

2. Check out new tools and versions

Many conferences double as a trade fair, so you can get a guided tour and a hands-on impression of new tools without installing trial versions and wondering “now what?”.

3. Meet experts (a/k/a make friends)

I’m always amazed at the combined expertise at conferences. And I don’t just mean the speakers. Go down to the hotel registration desk, and you may meet someone whose tweets you’re following. Sit down at the bar, and you may chat with someone who’s been using the content strategy you’re pondering. The chance acquaintance at the dinner table may have been using the tool you’re considering.

4. Connect with the hive mind

Often you can come with a specific question in mind and find ways to answer it. TCUK10 had a rant session that also gave people the opportunity to solicit answers from attendees. WritersUA has a more formalized Q&A opening session: “Let’s Look in the Mirror and See What We See“. Where else can you get instant, free consultation from dozens of experienced tech writers at once?

5. Visit with friends

If it’s your first conference, you’ll enjoy getting to know the people you’ve just met. After that, you can look forward to meeting people again whom you haven’t seen in a while, but whose tweets, blog posts or articles you’ve read.

Bottom line: Soak up inspiration and motivation

It all boils down to this: A conference can give you inspiration, motivation and confidence that you’re not alone, that you’re doing something professional and totally worthwhile! If that isn’t worth your time (and maybe even some of your money…) 🙂

Practical tip: Share costs and benefits

It’s pretty obvious that your company shares in most of the benefits. So it’s in their interest as well that you attend a conference. If your boss has more understanding than budget, consider if you could split the cost:

  • Maybe you can pay (and write off) travel costs?
  • Maybe you don’t have to take days off to attend?

[Update 8 March: Bill Albing answers that once-a-year conferences are sooo yesterday in the age of social media. Make sure you also read his Top 5 Reasons to Avoid a Tech Comm Conference.]

Your turn

Do these benefits work for you? What other benefits can you think of? If you’re freelancing, can you land new contracts at a conference? Please leave a comment.

How you can exploit the “Big Disconnect”

Through consumers, web 2.0 and social media exert a disruptive influence on corporate IT: Existing “systems of record” face challenges by new “systems of engagement”.

The thesis is by Geoffrey Moore in an AIIM white paper and presentation, and I’ve come across it by following some links in Sarah O’Keefe’s post “The technical communicator’s survival guide for 2011“. So once again, I find a great idea, summarize and apply it, instead of thinking up something wildly original myself. (Maybe not the worst skill for a tech writer, come to think of it… 🙂 )

Out-of-sync IT developments

Moore’s premise builds on out-of-sync advances of corporate vs. consumer IT:

  • Corporate IT developments recently focused on optimizing and consolidating otherwise mature, database-based “systems of record” which execute all kinds of transactions for finance, enterprise resource planning, customer relationship management, supply chain, etc.
  • Consumer IT, on the other hand, saw the snowballing improvements in access, bandwidth and mobile devices which have quickly pervaded ever more spheres of everyday culture.

“The Big Disconnect”

This imbalance leads to the pivotal insight of Moore’s analysis: As I read it, the disruptive influence on corporate IT occurs not through technologies or processes, but through people.

People are quick to adopt or reject or abandon new consumer IT tools and habits that cater to their needs. The same people feel hampered by corporate systems and workflows that seem unsuitable and obsolete. Moore calls it “The Big Disconnect”:

How can it be that
I am so powerful as a consumer
and so lame as an employee?

How consumer IT affects corporate IT

For the next 10 years, Moore expects that interactive, collaborative “systems of engagement” will influence and complement, though not necessarily replace, traditional “systems of record”:

  • Old systems are data-centric, while new systems focus on users.
  • Old systems have data security figured out, new systems make privacy of user data a key concern.
  • Old systems ensure efficiency, new systems provide effectiveness in relationships.

For a full comparison of the two kinds of systems, see Moore’s presentation “A ‘Future History’ of Content Management“, esp. slides 10-12 and 16.

But does it hold water?

Moore’s analysis has convinced me. I used to think that corporate and consumer IT markets differ because requirements and purchase decisions are made differently. But this cannot do away with the “Big Disconnect” which I’ve seen time and again in myself and in colleagues. Personally, I know that this frustration is real and tangible.

Also, the development of wikis and their corporate adoption is a good case study of the principle that Moore describes. If you know of other examples, please leave a comment.

What does it mean to tech comm?

The “Big Disconnect” affects those of us in technical communications in corporate IT in several ways.

Tech writers write for disconnected corporate consumers. So we do well to integrate some of the features of “systems of engagement” that Moore describes:

  • Add useful tips & tricks to reference facts.
  • Provide discussion forums to complement authoritative documentation.
  • Ensure quick and easy access to accurate and complete documentation.

But technical communications can do one better by helping to ease the drawbacks of engaging systems:

  • Offer easy, comprehensive searches through disparate formats and sources.
  • Moderate forums and user-generated content carefully to maintain high content standards and usability.

Tech writers are disconnected corporate consumers. So we can push for the improvement of the products and processes we describe or use.

  • On consumers’ behalf, we can advocate for improved usability and for documentation that is more efficient to use.
  • On our own behalf, we can insist on improving workflows that serve a system rather than us writers and our processes.
  • We can push to replace help authoring systems that support only fragments of our documentation workflows with more efficient tools.

Our managers are also disconnected, most likely. So when we argue for any of the above disruptions, we can probably fall back on their experience when we have to justify them. We’ll still need good metrics and ROI calculations, though… 🙂

To read further…

The “Big Disconnect” and its effects connect nicely with a couple of related ideas:

Your turn

Does the Big Disconnect make sense to you – or is it just the mundane in clever packaging? Do you think it’s relevant for technical communications? How else can we tech writers exploit it? Please leave a comment.

Learn about DITA in a couple of hours

DITA 101, second edition, by Ann Rockley and others is one of the best tool-independent books about DITA. It’s a good primer to learn about DITA in a couple of hours.

Strong context

The book excels in firmly embedding DITA’s technologies and workflows in the larger context of structured writing and topic-based authoring.

DITA 101, 2nd edition, cover

I attribute this to the authors’ years of solid experience in these areas, which comes through especially in the earlier chapters.

“The value of structure in content,” the second chapter, illustrates structured writing with the obvious example of cooking recipes. Then it goes on to show you how to deduce a common structure from three realistically different recipes – which I hadn’t seen done in such a clear and concise way.

“Reuse: Today’s best practice,” the third chapter, takes a high-level perspective. First it acknowledges organizational habits and beliefs that often prevent reuse. Then it presents good business reasons and ROI measures that show why reuse makes sense.

Comprehensive, solid coverage

From the fourth chapter on, Rockley and her co-authors describe DITA and its elements very well from various angles:

  • “Topics and maps – the basic building blocks of DITA” expands on the DITA specification with clear comments and helpful examples.
  • “A day in the life of a DITA author” is very valuable for writers who are part of a DITA project. Writing DITA topics and maps is fundamentally different from writing manuals, and this chapter highlights the essential changes in the authoring workflow.
  • “Planning for DITA” outlines the elements and roles in a DITA implementation project for the project manager. Don’t let the rather brief discussion fool you: Without analyzing content and reuse opportunities, without a content strategy and without covering all the project roles, you expose your DITA project to unnecessary risk.
  • “Calculating ROI for your DITA project” has been added for the second edition. It’s by co-author Mark Lewis, based on his earlier white papers: “DITA Metrics: Cost Metrics” and “DITA Metrics: Similarities and Savings for Conrefs and Translation“. It expands on the ROI discussion of chapter 3 and creates minor inconsistencies that weren’t eliminated in the editing process.
  • “Metadata” first introduces the topic and its benefits in general and at length. Then it describes the types and usefulness of metadata in DITA. This might seem a little pedestrian, but it’s actually helpful for more conventional writers and for managers. It ensures they fully understand this part of DITA which drives much of its efficiencies and workflows.
  • “DITA and technology” explains elements and features to consider when you select a DITA tool, content management system or publishing system. This is always tricky to do in a book, as much depends on your processes, organization and budget. While the chapter cannot substitute for good consulting, it manages to point out what you get yourself into and what to look out for.
  • “The advanced stuff” and “What’s new in DITA 1.2” continue the helpful elucidation of the DITA specification with comments and examples that was begun in the “Topics and maps” chapter.

Mediocre organization

For all its useful contents, the book deserves better, clearer organization!

  • Redundancies and minor inconsistencies occur as concepts are defined and discussed in several places. For example, topics are defined on pages 4, 24 and 46. The newly added ROI chapter complements the ROI points in the third chapter, but neither has cross-references to the other.
  • The index doesn’t always help you to connect all the occurrences and navigate the text.
  • Chapters are not numbered, yet the numbering of figures in each chapter starts at 1. It’s not a big problem, because references to figures always refer to the “nearest” number; it’s just irritating.

Formal errors

The book contains several errors which add to the impression of poor production values. They don’t hurt the overall message or comprehensibility, but they are annoying anyway:

  • Mixed up illustrations such as the properties box in Word (page 72) vs. the properties box from the File Manager (73)
  • Spelling errors such as “somtimes” (1) and “execeptions” (16)
  • Problems with articles such as “a author” (20) and a system that “has ability to read this metadata” (77)
  • Common language mistakes such as “its” instead of “it’s” (52)

Lack of competition

Another reason why it’s still one of the best books on the topic is that there simply aren’t many others!

  • Practical DITA by Julio Vazquez is the only serious contender, and its practical, in-the-trenches advice complements Rockley’s book very well.
  • [More books are pointed out in the comments, thanks everybody! – Added January 11, 2010.]
  • DITA Open Toolkit by “editors” Lambert M. Surhone, Mariam T. Tennoe, Susan F. Henssonow is a compilation of Wikipedia articles. Amazon reviewers call other titles produced by the same editing team and publisher a scam.

Of course, several other honorable and worthwhile books include articles or chapters on DITA and/or discuss DITA in context of specific tools.

My recommendation

Despite its shortcomings, the book’s own claim is valid: “If you’re in the process of implementing DITA, expect to do so in the future, or just want to learn more about it without having to wade through technical specifications, this is the book for you.”

I recommend that you read it if you are

  • Involved in a project to implement DITA
  • Writing or translating documentation in a DITA environment
  • Managing technical writers

Your turn

Have you read this book? What’s your opinion? Can you recommend other books or resources about DITA? Feel free to leave a comment!

Shape the hype cycle with tech comm

You can use technical communication to accompany and even nudge technologies and products along the hype cycle.

The hype cycle

The hype cycle was invented by Gartner Research in 1995 and has since underpinned dozens of their reports. Here’s a schematic example:

image from http://newsletter.stc-carolina.org/

You can see how it tracks visibility and expectations of a technology across time in 5 stages, from the Technology Trigger up to the Peak of Inflated Expectations and down into the Trough of Disillusionment, then slowly up the Slope of Enlightenment, until it reaches the final Plateau of Productivity.

I think there are a few remarkable things about the hype cycle:

  • Let’s get the obvious out of the way: It’s not a cycle at all, but a curve… 🙂
  • Different types of companies may engage in the cycle in different stages. That means the hype cycle is not some fate to be endured, but something that can shape corporate strategy – and by extension content strategy.
  • The hype cycle is not just for managers and marketeers. It speaks to our industries as well: Tech comm consultant Sarah O’Keefe started an article on “The Hidden Cost of DITA” with it in 2008 (that’s where I got the example above). And UX designer Ron George put one up on his blog last year.

Enter technical communication

So what does technical communications have to do with it?

We technical communicators provide the words around stuff on the curve. So we can put a spin on them, to a certain extent. I don’t think we can move a technology or a product into a totally different stage with documentation. But I believe we can mitigate adverse effects and nudge our subject along the curve a little.

There are two reasons why this works:

  • Technical communication is part of the hype cycle. Whether we take it into account or not, our documentation contributes to the item’s visibility, and it certainly shapes expectations on it.
  • Technical communication can be dynamic and agile. It is usually quite easy and fast to change the technical communication in contents, tone and spin to address a new use case, an additional persona or a different audience.

And there are several ways you can use technical communication to influence the hype cycle:

  • It’s all about context. You know this already, if you’ve ever thought about personas, your audience and their situation when they are using your product. So take into account the hype cycle, especially that difficult phase into and out of the Trough of Disillusionment. “First contact” documentation such as quick starts are particularly suited to address inflated expectations and to offer a shortcut to the Plateau of Productivity.
  • Position yourself as the users’ advocate who accompanies them along the curve. Who is better suited to guide them up the Slope of Enlightenment than us technical communicators? Keep visibility of the product and its benefits up (to the limited extent that you can), and keep users’ expectations realistic.
  • Engage with the users. Hiking up a slope in silence is no fun. Find out what interests your users, what they try to do and where they want to go with the product, whether by soliciting feedback or user-generated content. (But don’t forget to check back with your diligent product manager about the general direction…)

Your turn

What do you think? Should you, can you write with the hype cycle in mind? How can it affect the relationship between technical communications and marketing?

Top strategies to embrace cost metrics

Moving to a structured writing environment can change the metrics of documentation. That’s one of the lessons I learned in a great webinar by Scriptorium‘s Sarah O’Keefe about  “Managing in an XML environment”.

If you’ve missed it, check out the 45-minute recording/slideshow on their website. You’ll find it very interesting, if you’re wondering what it will be like to create and maintain documentation, once you have implemented XML. I’ll summarize a few aspects that I found interesting and comment on them.

XML increases transparency

Creating documentation in an XML environment increases the transparency in writing documentation, for better or worse. Tech writers’ work in XML is more visible earlier in the process: Without XML, a writer may deliver a print-ready PDF after months. With XML, she might check in topics every day for a nightly build.

Just as content gets chunked from books to topics, so progress gets chunked from weeks and months to hours and days. What can be measured often will get measured, so Sarah warns: Beware of seductive metrics. Measuring pages per day, for example, is silly: It will increase page count, but not necessarily the quality or the effectiveness of the documentation.

Strategy #1: Learn to QUACK

Measure something useful instead. Sarah suggests the QUACK quotient:

(quality + usability + accuracy + completeness + “konciseness”) / cost

Sarah goes on to define each of the five “QUACK” factors in similar terms as Gretchen Hargis, et al. in Developing Quality Technical Information. For example, quality considers whether the documentation is well-written and well-structured. “Konciseness” (spelling follows acronym as form follows function) means to provide as little documentation as is necessary, but no less. This improves efficiency for users and localizers alike.
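To make the arithmetic concrete, here is a minimal sketch of the QUACK quotient in Python. All scores, the 1–10 rubric, and the weights are invented for illustration; Sarah’s webinar doesn’t prescribe a scale or weighting scheme:

```python
# Sketch of the QUACK quotient:
# (quality + usability + accuracy + completeness + "konciseness") / cost
# Scores (say, 1-10 from a review rubric) and weights are invented here.

FACTORS = ["quality", "usability", "accuracy", "completeness", "konciseness"]

def quack_quotient(scores, weights=None):
    """Weighted sum of the five QUACK factors, divided by cost."""
    if weights is None:
        weights = {f: 1.0 for f in FACTORS}
    numerator = sum(scores[f] * weights[f] for f in FACTORS)
    return numerator / scores["cost"]

# Example: a regulated industry might weight completeness more heavily.
scores = {"quality": 7, "usability": 8, "accuracy": 9,
          "completeness": 6, "konciseness": 7, "cost": 5}
weights = {"quality": 1, "usability": 1, "accuracy": 1,
           "completeness": 2, "konciseness": 1}
print(round(quack_quotient(scores, weights), 2))  # higher is better
```

Even this toy version shows where the arguments will start: not over the formula, but over who gets to set the scores and the weights.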

I think this approach is great for scenarios where you can’t get out of cost metrics. Using accepted quality criteria is definitely better than being held to junky metrics.

But I wonder how quantifiable the five dividends actually are: How accurate is a topic? “Very accurate”, if I’m lucky – but I wouldn’t know how to put a number on that… Also, each dividend should be weighted according to audience and industry, Sarah explains. For example, completeness of documentation is more important in regulated industries than video games. That doesn’t make the quantification any easier or less contested.

Strategy #2: Duck the cost

My own strategy requires even more leverage for tech writers than just pushing a new formula through to assess our work. So it probably doesn’t work for all tech writers.

The QUACK quotient takes for granted that documentation is a cost center. Of course, many managers share that view. But I wonder if we tech writers wouldn’t be better off, if we got out of that defensive corner altogether.

I think it helps us more in the long run to show how documentation contributes to the larger corporate processes of production and value added. So I suggest it’s worth arguing along these lines:

  • Turn transparency into cost attribution: Show how each topic’s cost can be counted towards the development cost of the feature or part that it describes, just like other stages in the production processes. It’s like applying total cost of ownership to your own products.
  • Turn topic reuse into corporate efficiency and assets: Show how reused topics create extra value or reduce costs in other departments, for example, in training, customer services or marketing.
  • Measure relative cost savings: Show how writing XML-based documentation is more efficient than the previous non-XML process, once you’ve overcome the initial hurdles.
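A back-of-the-envelope sketch of that last point, relative cost savings, might look like this. Every number here (topic counts, hourly rate, hours per topic, reuse rate) is invented purely for illustration:

```python
# Hypothetical before/after comparison of documentation cost (numbers invented).
hours_per_topic_old = 4.0   # legacy, non-XML process
hours_per_topic_new = 2.5   # XML-based process
reuse_rate = 0.30           # share of topics reused instead of rewritten

def annual_cost(topics, hourly_rate, hours_per_topic, reuse=0.0):
    # Simplifying assumption: reused topics cost nothing extra.
    authored = topics * (1 - reuse)
    return authored * hours_per_topic * hourly_rate

old = annual_cost(500, 60, hours_per_topic_old)
new = annual_cost(500, 60, hours_per_topic_new, reuse=reuse_rate)
savings = (old - new) / old
print(f"relative savings: {savings:.0%}")
```

The point of such a sketch is not the exact percentage but the comparison: it frames XML as an efficiency gain over the previous process rather than as a standalone cost.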

Bonus link: Cost metrics white paper

If you’re into DITA or want to see how cost metrics for structured writing break down to actual numbers, check out Mark Lewis’ clear and thorough white paper “DITA Metrics: Cost Metrics“:

You’ve already concluded that moving to DITA will save you tons of time and money. But management says prove it. This paper helps you determine the cost portion of the ROI calculation. What are my costs now? What will my new costs be with DITA? And what is the difference—my savings?

Your turn

What do you think is the best way to justify tech writing cost? What scenarios or strategies have you seen succeed or fail? Share your thoughts in the comments.

DocTrain West 2009: Mark Lewis on “DITA Metrics: Cost Metrics”

Mark Lewis‘s session presented metrics with which you can show ROI for DITA by reusing identical and similar topics. See Lewis’ White Paper here on the Content Wrangler for details.

An engaged discussion ensued which shows the interest in tangible metrics:

  • There was no immediate answer about the total cost of ownership for converting to and maintaining DITA. But the burden of proof in terms of TCO shouldn’t rest with the documentation team – which most likely doesn’t have all the numbers for it, anyway. Instead, the team can try to figure out what management is willing to spend money on and try to tap into that.
  • If you have a diverse team where some writers are faster than others, you can either use a median topic development time (instead of the mean; see the comment wall of the DITA Metrics group in the Content Wrangler ning network). Or you can use normalized sizes and assign each writer a factor that defines how long he or she needs to complete one of those development sizes.
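The normalized-sizes idea from the discussion can be sketched in a few lines of Python. The writer names, factors, and base hours are all made up; the point is only the mechanics of multiplying size units by a per-writer factor:

```python
# Hypothetical normalization of topic development time across a diverse team.
# Each writer gets a factor: how long they need relative to a baseline writer.
writer_factor = {"alice": 1.0, "bob": 1.5, "carol": 0.8}  # invented values

def estimate_hours(assignments, base_hours_per_unit=2.0):
    """Total estimate: topic size units x writer factor x baseline hours."""
    return sum(units * writer_factor[writer] * base_hours_per_unit
               for writer, units in assignments)

# 3 size units for alice, 2 for bob, 4 for carol
print(estimate_hours([("alice", 3), ("bob", 2), ("carol", 4)]))
```

With factors in place, project estimates stay comparable across the team even though individual writers work at different speeds.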