Tech comm MOOC by STC not happening apparently

The STC’s MOOC, which I announced last month, is apparently not happening. It was supposed to let participants explore the field of technical communications, and I was scheduled to teach the introductory module starting 30 September.

My assumption that the MOOC is not happening is based on the facts I have at this point:

  • There has been no announcement by the STC beyond an introductory one-page article in the July/August issue of the STC’s intercom magazine.
  • There is no place to sign up for the MOOC and to get specific information.
  • There is no content in the MOOC’s staging area.

I’m really sorry that it’s not happening.

I think we’ve lost a great opportunity to let people know what a varied and vibrant profession technical communication is. That we are mainly about improving relationships with customers and users where people meet products and services – not about spelling and serial commas.

At a time when professions and job requirements develop rapidly around us, it is important to prove that technical communicators do add value and play important roles in defining and implementing content strategies and user experiences – and this MOOC would have been a great chance to do just that.

Join us for a tech comm intro MOOC by STC

This fall, the STC will run a free 5-week MOOC to allow everybody online to explore the field of technical communications – and I’m excited to be teaching the introductory module!

The full syllabus

The MOOC will highlight the roles and responsibilities of technical communication professionals through 5 specializations in 5 weekly modules, starting on 30 September:

  1. Introduction to technical communication, by myself
  2. Content development and delivery, by Bernard Aschwanden
  3. Content strategy and lifecycle, by Mollye Barrett
  4. Instructional design, by Dana West and Phylise Banner
  5. Usability and user assistance, by Ray Gallon and David Farbey

The introductory module

My module in the first week will serve as a general introduction to the field of technical communication.

What will you learn?

After the module, you should be able to

  • Define purpose, benefits and tasks of technical communication
  • Argue the value of technical communication for companies and clients
  • Describe the daily job and career of a technical communicator
  • Identify the elements of effective technical communication
  • Describe and develop basic core skills of a technical communicator

What do you do?

Your week will start with some assigned texts and videos to introduce the topics.

You will see how all the pieces fit together in an online lesson on Wednesday afternoon (US time). The outline will be the same as for the readings; it looks something like this:

  1. What is technical communication? – Definitions and trends
    • A changing definition, from technical writing to business problem-solving
    • Recent trends (mobile and embedded help, social media and user-generated content)
  2. Why have technical communication? – Benefits and business cases
    • How technical communication benefits users and companies and products
    • What only technical communication can do (USPs)
  3. Who is a technical communicator? – Tasks and career
    • A day in the life
    • Personality and aptitudes
    • A versatile career path
  4. How does a technical communicator work? – Skills and expertise
    • Know your audience through audience analysis and personas
    • Learn from subject-matter experts by research and collaboration
    • Write task-oriented topics using task analysis and modular topic types
    • Edit modular documentation for content and language

You will have a chance to try your hand at technical communications in a couple of learning activities (a/k/a assignments) around creating and editing documentation.

Oof, that’s a lot, no?

Well, yes and no. Yes, it is a wide area, but the purpose is to give you a taste of our versatile profession! I’ll start with the larger picture to illustrate the value of tech comm and how it can be cool and profitable, before diving into a few core skills in depth. The four later modules can afford to be a little more focused.

More information

More information will be available shortly on the web sites of the STC, which is sponsoring this MOOC, and of CourseSites, which is furnishing the platform for it.

In the meantime, check out Mollye Barrett talking about the MOOC and her module in a 1:30 video.

What do you think?

Would this be interesting to participate in? What other topics would you expect to see covered in the intro module? Let me know in the comments, and I’ll see if I can address your points, either in the comments or in the MOOC module itself!

Obvious structure in tech comm benefits all users

Informational text that exposes its “structural information, such as hierarchical relations” gets high reading comprehension scores, whether readers have prior subject knowledge or not. This is the result of a study reported in Learning Solutions Magazine by Chris Atherton. And it’s good news for technical communication because it means structured writing and topic-based authoring done well benefit novice and expert users alike.

The study

The study presented a 5,000-word article in three formats to two different groups. The three formats were:

  • A linear document of paragraphs
  • A hierarchical set of linked topics, which was basically a web site six levels deep
  • A mixed format which combined linear text presentation with links to related topics that didn’t expose structure or hierarchy

The two audience groups were:

  • Novices without prior knowledge of the subject
  • Experts who had formal training in the subject

The results

It’s best to scroll down to the results graph over at the magazine website, but in case that disappears, here’s a summary of the different reading comprehension scores:

  • Novices understood the hierarchical format best, closely followed by the mixed format, with the linear format a distant third.
  • Experts understood the linear format best, closely followed by the hierarchical format, with the mixed format a distant third.

So exposing the hierarchy and structure of the text benefits novices and experts alike. If you’re writing for experts only, presenting linear text gives them a slight advantage, but “shuts out” novices.

The implications for tech comm

  • Structured authoring helps your users understand and remember. Novice and expert users alike can make sense of the information not only from the individual bits and pieces, but also from the structure of how everything hangs together. For example, consider relating concepts and sub-concepts to one another. Or when instructing users to do tasks, consider giving an overview of the big-picture process first. Then break down the process clearly into distinct procedures and further into individual steps. For many readers, easy access to structure also helps them to retain information better, regardless of how they manage to memorize it.
  • Structured authoring helps you to create complete documentation efficiently. You can organize and maintain your information more efficiently with structure and hierarchy. Structure makes it easier to ensure that each piece of information has a distinct place, so you can avoid redundancies. Hierarchies make it clear where your concepts and procedures are complete and where you still have gaps. It’s easier to note a missing topic or sub-chapter than a missing paragraph somewhere in linear text.
  • Limited advantage of linear text. The study showed that linear text in paragraphs is most comprehensible for expert readers. But I think the advantage of this format is in general limited:
    • For novices, linear text is a distant third, so relying on “linear” requires that you have a homogeneously expert audience.
    • For you as a writer, linear text possibly takes more time or effort to maintain, depending on how much text you maintain and how often you update.
    • For other writers who need to edit or update your documentation, linear text is probably harder than topics that expose the internal structure of the subject matter.

By the way: Chris Atherton and I will lead a workshop together at TCUK13 in Bristol on 24 September. So if you’re in the area and want to “Bake your own taxonomy”, consider joining us. 🙂

How our addiction to meaning benefits tech comm

Join me for my presentation “Addicted to Meaning: How Good Technical Communication is Like Bad Magic Tricks” at tekom/tcworld in Wiesbaden on Tue 23 Oct at 1:45 pm.

In the session, I will explore how “meaning” works in technical communication, why it sometimes fails and how you can improve its chance for success. Being meaningful in your work is harder to measure than being correct, concise or consistent. However, it is just as essential: Understanding how and why communication is meaningful to your readers can help you to make your documentation more effective and to distinguish good from bad.

Using examples from topic-based authoring and minimalism, I will illustrate the underlying workings of semiotics and mental models to show:

  • Why minimalism works, but FAQs don’t
  • Why asking a friend is effective, even when he doesn’t know the answer
  • How readers create meaning from documentation
  • … and how good documentation is like bad magic tricks 🙂

I will put our familiar tech comm tool box into a new context, so you can get a deeper understanding and a fresh perspective on tech comm and how it fits into the bigger picture of meaningful communication.

I’ve set up the topic in two earlier posts which give you an idea of how I’ll tackle the issue.

Improve tech comm by knowing a foreign language

Knowing a second language can help your tech comm work in a couple of ways. The benefit is probably not great enough in itself to justify learning a language, but if you have or had other reasons, it’s worth considering these side benefits.

Making decisions in a foreign language

I got to thinking about this when I read a story in Wired that “thinking in a second language reduced deep-seated, misleading biases”. Psychologists at the University of Chicago conducted a study (abstract, full text in PDF) that asked “Would you make the same decisions in a foreign language as you would in your native tongue?”

In a foreign language, we use the same experiences and processes to evaluate situations and estimate risks. However, “a foreign language is like a distancing mechanism. It’s almost like you’re a slightly different person,” says Boaz Keysar who led the study (Business Week). According to the study, thinking in a non-native language emphasizes the systematic, analytical reasoning process. Thinking in our native tongue, on the other hand, leaves more room for the complementary intuitive, emotional decision process: “The researchers believe a second language provides a useful cognitive distance from automatic processes, promoting analytical thought and reducing unthinking, emotional reaction” (Wired). (Whether an analytical process yields “better” decisions is an entirely different story…)

Making tech comm better with a foreign language

For the past 12 years that I’ve worked full-time as a tech writer, I’ve written almost exclusively in my second language, English, though I did occasionally translate my English writing into my native German. The study’s conclusion that a second language provides a “useful cognitive distance … promoting analytical thought” explains what I’ve experienced in my work in either language, beyond the scope of the actual study:

1. A second chance to learn how language works. Many writers I’ve talked to have a solid grasp of their native tongue, but cannot necessarily explain the rules for why something is right or wrong. When you learn a second language consciously, you also learn about grammar (again), its powers and limitations. And you can understand how something that works in one language can be similar or even different in another. For me, writing in English certainly made me a more conscientious “grammarian” in either language.

2. Mirroring the “distance” of users. In my experience, the distance that a second language brings is basically pragmatic incompetence: In a foreign language, I’m not as fully aware of the social context, of how time, space and inferred intent contribute to any communication act. I may trip over an idiom I don’t understand, or I may fail to see the irony of a statement and take it at face value.

In tech comm, this linguistic challenge is actually a benefit, because many of my readers share in that distancing experience. My readers may read my documentation in their second language. Or they might use the product in a context and for a purpose that is more or less different than intended and documented. This is why localization is harder than just translation. Internationalization can even become an accessibility issue, when a product no longer works properly in a certain context. So facing similar pragmatic uncertainties makes me a better advocate of the users I write for.

Your turn

If you know a second language, do you find it helps your writing? Do you have other reasons or benefits besides the ones I listed?

A. Ames & A. Riley on info experience models at STC12

Andrea Ames and Alyson Riley, both from IBM, presented a dense whirlwind tour on “Modelling Information Experiences” that combine four related models into a heavy-duty, corporate information architecture (IA).

While the proceedings don’t include a paper on this session, Andrea posted the slides, and the presenters published a related article (login required) “Helping Us Think: The Role of Abstract, Conceptual Models in Strategic Information Architecture” in the January 2012 issue of the STC’s intercom journal.

The session proceeded in six parts. First, Alyson explained IA models in general and how they work. Then Andrea described each of the four model types that make up an IA specifically.

IA models as science and art

Information architecture relates to science as its models draw on insights and theories of cognition. And its models relate to art as they aim to create a meaningful experience. Both aspects are important. Only if IA models manage to blend science and art can they touch the head and the heart.

The session focused on IA models instead of theories (which are too vague and abstract) or implementations (which are too specific and limited). Thanks to the in-between position of IA models, we can use them to

  • Ask the right questions to arrive at a suitable, feasible IA
  • Tolerate the ambiguities of “real life”

Models present descriptive patterns, not prescriptive rules. They don’t say how stuff must be, but how it can be represented. They differ from real life, but real life is still recognizable in them.

That means you cannot simply implement a model on autopilot and get it right. Instead, you have to

  • Think how to implement the model
  • Vary the model for your users’ benefit
  • Listen to the right knowledgeable people when implementing

Models help you think

To arrive at your concrete IA, you take the model’s abstract patterns and apply your project-specific details to them, the who, what, where and when.

This task is less mechanical and less copy-and-paste than it sounds. It’s not so much a question of following rules and recipes, but of making abstract patterns come to life in a coherent, flexible whole. (If you’ve ever tried to create meaningful concept or task topics by following an information model, you know it’s more than just filling in a DITA template. You need to use your own judgment about what goes where to achieve and maximize user benefit.)

Since there are four related models, you need to think carefully how each of these models should benefit your users. And the models help you think; they scale and adapt to:

  • How your business and its information requirements develop over time, how they grow and expand in desired directions
  • How your customers find, understand and apply the information they need

The four IA model types

The IA model that Andrea then explained consists of four related model types:

use model (content model + access model = information model)

Each of these model types needs to be developed and validated separately.

The use model defines how users interact with information. It describes standard scenarios for optimal user experience within the product or system context. It outlines what information users need and why and how they use it. It includes use scenarios for the entire product life cycle and user personas that outline different types of users. Fortunately for us technical communicators, Andrea pointed out, describing all this is part of our core skill set.

The content model defines how information is structured effectively. This could be DITA (in the case of the company I work for, this is what we call our DITA-derived “information model”). It includes the granular information units and their internal structure, for example, task topics and their standard sequence of contained information. It also describes how these granular units are combined into deliverables.

The access model defines how users access the information efficiently. It makes provided information useable, thanks to a navigation tree, a search function, a filtering function to increase the relevance of search results, an index, etc. Artefacts of this model type are wireframes, storyboards, a navigation tree and the like.

The information model defines how users get from one stage to the next. It uses the other three model types as input and fills in the gaps. It combines the content and access models which probably work fine during the installation and configuration processes, but may not yet carry a user persona from one stage to the next. As such, the information model is sort of the auxiliary topic that you sometimes need to insert between concept, task and reference topics to make a complete book out of them. At the same time, this model type is more detailed than the use model and closer to the other two types.

My takeaways

I was extremely grateful for this session and rank it among the top 3 I’ve seen at the summit – or any tech comm conference I’ve been to! Yes, it was fairly abstract and ran too long – my only complaint is that it left only 2 minutes for discussion at the end.

As abstract as much of the session was, it actually solved a couple of problems I couldn’t quite put into words. After designing and teaching our company’s DITA-derived information model (which is a “content model” in this session’s terms), I thought our model was applicable and useful, but it had two problems:

  • Our model lacked context. – Now I know that’s because we haven’t mapped out our use model in the same detail and failed to connect the two.
  • Our model was baked into a template for less experienced writers to fill in – with varying results. – Now I know that’s because the models are not supposed to provide templates, but require writers to use their own judgment and keep in mind the big picture to deliver effective information.

Another lesson I learned is that many structured information paradigms seem to include a less rigid element that sweeps up much of the miscellaneous remnants. In DITA, there’s the general topic which is used for “under-defined” auxiliary topics to give structure, especially to print deliverables such as manuals. In this IA model, there’s the information model which fills the gaps and ensures a more seamless user experience than the other three models can ensure.

– For an alternative take, see Sarah Maddox’ post about this session.

What I learned from my pattern recognition talk at STC12

My session on pattern recognition for technical communicators was a very rewarding experience which taught me a lot. I thank Paul Mueller, Conference Manager, and Alyssa Fox, Program Committee Chair, for inviting me to speak, even though this was my first summit. Their friendly, indefatigable support set the tone for a high-energy, well-run conference.

If you want to revisit the session, here are the slides and the article from the proceedings. (If you haven’t attended the session, the article will probably be more helpful, for reasons explained below.)

Here’s a look behind the scenes of my talk and what I learned:

Two different roles

Attending a conference is not the same as speaking at one. Well, duh…

But what I mean is this: I took care to be an observant attendee before my own session, so I could gauge expectations and behaviors of attendees. Bluntly put: As attendee, I want my money’s worth. As speaker, I need to give my audience their money’s worth. Observing and knowing the first helped me achieve the second – or so I hope.

When I was still in academia, I was often put off by conference presenters whose ignorance of the audience’s interests and demands could be quite arrogant – and usually didn’t help the conference as a whole, either. So I wanted to avoid that.

Plus, one of the mantras of technical communicators is: “Know your audience!” How could I afford to ignore it at a tech comm conference?

“Unusual” works

I was unsure about my topic, because it was a bit unusual and off-the-wall: Tying the psychology of perception to technical communications – only to confirm what we do anyway, such as topic-based authoring and parallelism?

Me presenting on pattern recognition at STC12

(Photo courtesy of Jamie Gillenwater)

On the other hand, I knew from attending previous conferences how much I enjoyed and benefitted from such sessions. A-ha moments are fun and enlightening; they work in TV science programs, so maybe they would work at a conference as well…

During the session, I was too busy to count heads, but I’m guessing I might have had an audience of around 70 people. There were other sessions to choose from. Or breakfast, since I was in the 8:30 slot. So I decided early on to get over my worries and trust in the general curiosity of tech writers. 🙂

Rehearsing pays

So practicing helps… Again: Well, duh…

Specifically, it allowed me to move beyond bullet points. I’ve seen many a session (less so at the summit) where presenters mainly read their slides. To me, those are usually not the best presentations. I don’t need great showmanship, but reading the slides seems as if the speaker serves the slides rather than vice versa.

I’ve tried to make at least the a-ha moments less reliant on words and bullet lists and more like an illustrated story. And I’ve found that decent images will remind me of the story just fine. (The last section about actually applying pattern recognition in tech comm has more bullet lists, so people could take notes.)

In addition, I found that rehearsing also helps me to “know time” (a pet obsession of mine; I even have a blog post about it). I’ve seen excellent presentations, but it peeves me a bit if they take up 58 of 60 minutes. I also learn by asking questions and by engaging with the speaker or others in the audience. And to me, it seems a bit careless to mar an excellent session by running overtime.

Budget your energy

I am really glad (and almost a little proud) that we’ve had such a lively, engaging discussion after my presentation. People suggested additional sources, propped up some of my arguments and ran with others, bringing up evolution, Edward Tufte’s information graphics and – privately afterwards – even Immanuel Kant!

My one regret is that by that time I was a bit exhausted and didn’t always do justice to the questions and comments when repeating them for everybody in the room and for the recording. I’m not sure how I can achieve it, but I want to save not only time, but also energy to facilitate the discussion afterwards.

But all in all I think the session went well. I’ve really enjoyed the experience and am glad to contribute to our curious, friendly and supportive tech comm community!

Art vs. online: 2 dimensions of curating

Curating is a cool word, or trendy jargon, for what happens both in web technologies and in art museums, but the two are fundamentally different activities.

In this post, I want to add an alternative view to Rachel Potts, who recently wrote about “When software UX met museum curation”. Where Rachel emphasises similarities, I’d like to focus on the differences, especially as they relate to art museums.

Your artefacts

One serious limitation and difference in curating at art museums, compared to anything in software and online, is that you need to care for original, unique works. If you mount a special exhibition, you need to procure them to begin with. And sometimes you cannot get them, no matter how much you want them in the show to present an artist or an era in history or to make your case.

  • Some works don’t travel because they’re fragile or because the insurance is too costly or because they’re centerpieces in the collection that owns them.
  • Some owners won’t lend works to you, because you cannot satisfy security requirements or because you’re too small a museum or because they don’t like your director.
  • Some works are simply lost.

Of course, you can always do with fewer or lesser works or, in the case of historic artefacts, with copies, but that invariably hurts the critical response and your attendance.

"Stalking Christina" - other people regarding my favorite painting

Your objectives

Another difference is that for many art museums “enabling users to learn” is one objective among many. And several other objectives are, unfortunately, at odds with it:

  • Some pieces are too sensitive to light or touch or movement to allow more than very few people to experience them.
  • Some museums need to please or placate donors (who may influence what’s shown and what not) and trustees (who may influence what gets paid for and what not).
  • Some museums don’t have the means: They lack the manpower to accommodate visitors more than a few hours per week. Or they don’t have the expertise to allow them to learn well.

Your audience

A third difference is that art museums that put on ambitious, critically well-regarded exhibitions often find that attendance is surprisingly low. The reason is simple and disappointing: Many people don’t want to be enabled to learn in art museums. They don’t want to learn new things, much less have their beliefs challenged. Instead, many people visit an art museum because of the way it makes them feel:

  • Many go to be in the presence of beauty or to be awed. Hence the success of any show whose title mentions a best-selling artist or any of the words “Impressionist”, “Gold” or “Gods” – even if the title is far-fetched and the show mediocre at best. “Dinosaurs” gets kids, and anything that flies or shoots gets their dads.
  • Some go to feel cool. Hence the success of after-work parties in modern art museums.

The words

Roger Hart once told me that it’s futile to try to stop linguistic change. And the web is a great change agent of language:

  • How many kids today know that women warriors (or a river) gave their name to an online store?
  • The German language has known about “email” for centuries (though we only spell it thus after a recent change in orthography); in English, it’s known as “enamel”.

But if language is to represent the real world, I advocate respecting the differences within one word, such as curating. Conflating two similar activities into the same word cheapens our experience of the stuff that surrounds us.

Proving the benefit of consistency in tech comm

To ensure efficiency and accessibility of technical communications, use consistent, common formatting, for example, for interface elements. What sounds obvious to many technical communicators is actually proven in academic studies. This post is for people looking for proof that consistency has a benefit in technical communications.

I’m taking my cue from a question that appeared in a LinkedIn group:

[I need to] find studies or tests that show that it is value-adding to have consistent formatting on e.g. User Interaction elements (such as buttons or handles) in your instructive documents. Can anyone share studies or tests in this area?

I can offer an answer on two levels:

  • The general benefits in terms of human perception
  • The particular benefits of consistent documentation

The neuroscience of consistency

Human perception favors consistency. The mind groups things more easily, faster and with more confidence if they’re consistent and have something in common.

Gestalt law of similarity illustrated

Gestalt law of similarity: The mind groups similar elements into collective entities, from Wikipedia.

Psychological studies have shown two principles by which human perception groups things: proximity and similarity. For a comparison of these two principles and further references, see Han, S., Song, Y., et al. (2001). Neural substrates for visual perceptual grouping in humans. Psychophysiology, 38, 926-935. Han and colleagues actually quote several studies showing that “proximity is a more salient cue than similarity.”

In technical communications texts, we usually can’t practically lump all names of windows or fields across all topics together to show they’re related.

But we can resort to similarity to help readers understand that we mean a GUI element at each occurrence. If we always write their names in bold, for example, readers will recognize that similarity across topics and learn to scan for it (whether they’re aware of it or not). If we always mark up GUI elements somehow, sometimes in bold, sometimes in italics, readers are more likely to wonder if the different markup has a meaning – and they won’t be able to scan the text as quickly and reliably.
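To make this a little more hands-on, here is a small, hypothetical sketch of how such a consistency rule could be checked automatically. It is not a tool from any of the studies cited below; the Markdown-style markup, the regular expressions and the file handling are my own assumptions, just to illustrate the idea of catching mixed markup of GUI elements:

```python
# Hypothetical sketch (not from the cited studies): flag UI terms that are
# marked up inconsistently in Markdown-style documentation source,
# e.g. **Save** in one topic and *Save* in another.
import re
import sys
from collections import defaultdict

BOLD = re.compile(r"\*\*([A-Z][\w ]+?)\*\*")
ITALIC = re.compile(r"(?<!\*)\*([A-Z][\w ]+?)\*(?!\*)")

def markup_styles(text):
    """Map each UI term to the set of markup styles used for it."""
    styles = defaultdict(set)
    for term in BOLD.findall(text):
        styles[term].add("bold")
    for term in ITALIC.findall(text):
        styles[term].add("italic")
    return styles

if __name__ == "__main__":
    # Aggregate across all files passed on the command line.
    combined = defaultdict(set)
    for path in sys.argv[1:]:
        with open(path, encoding="utf-8") as f:
            for term, used in markup_styles(f.read()).items():
                combined[term] |= used
    for term, used in sorted(combined.items()):
        if len(used) > 1:
            print(f"'{term}' uses mixed markup across the given files: {sorted(used)}")
```

Run against a set of topic files, such a check would list terms that appear both in bold and in italics – exactly the kind of inconsistency that keeps readers from scanning reliably.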

For more psychological research and how it applies to technical communications, refer to Chris Atherton’s excellent and accessible article “What and where?” in ISTC’s Communicator quarterly, Spring 2011, pp. 28-29.

Studies of applied consistency

So much for the theory. But does consistency, for example in formatting GUI elements, have an actual benefit in documentation?

One very good resource to argue for such consistency is the Research-Based Web Design & Usability Guidelines. This 292-page tome sets out “to provide quantified, peer-reviewed Web site design guidelines”. It was published by the US Department of Health and Human Services in 2006 and is available for free download, as a book and as individual chapters.

Chapter 11 on “Text Appearance” has a couple of applicable guidelines:

#2 Format common items consistently

Common, recurring items, such as telephone numbers, time records, and button and window names, should be formatted consistently, according to expert recommendations in: Ahlstrom, V. & Longo, K. (2001). Human factors design guide update (Report number DOT/FAA/CT-96/01): A revision to chapter 8 – computer human interface guidelines.

#4 Ensure visual consistency

Visual consistency, including the appearance of characters in interfaces, reduces user errors. “Studies found that tasks performed on more consistent interfaces resulted in (1) a reduction in task completion times; (2) a reduction in errors; (3) an increase in user satisfaction; and (4) a reduction in learning time.” The quoted studies include:

  • Adamson, P.J. & Wallace, F.L. (1997). A comparison between consistent and inconsistent graphical user interfaces. Jacksonville: University of Northern Florida, Department of Computer and Information Sciences.
  • Eberts, R.E. (1997). Cognitive modeling. In: G. Salvendy (ed.), Handbook of Human Factors and Ergonomics, 2nd ed., New York: John Wiley & Sons.
  • Ozok, A.A. & Salvendy, G. (2000). Measuring consistency of web page design and its effects on performance and satisfaction. Ergonomics, 43(4), 443-460.
  • Schneider, W., Dumais, S.T. & Shiffrin, R.M. (1984). Automatic and control processing and attention. Varieties of Attention. New York: Academic Press, 1-27.
  • Schneider, W. & Shiffrin, R.M. (1977). Controlled and automatic human information processing: detection, search and attention, Psychological Review, 84, 1-66.

Specifically, Ozok and Salvendy in 2000 confirmed the earlier studies that visually inconsistent interfaces lead users to poorer performance and more errors, see the summary in the Human Factors International newsletter.

– I hope this little field trip into academia can help you to prove that consistency not merely seems somehow desirable and logical, but actually has cognitive benefits proven in studies.

“Statistics without maths” workshop at #TCUK11

Technical Communication UK 2011 is off to a good start with around 100 people attending six pre-conference half-day workshops on Tuesday. Even the night before saw about 20 attendees joining the organisers to help with last-minute setup chores, not to mention drinks and dinner.

On Tuesday afternoon, I attended the workshop “Statistics without maths: acquiring, visualising and interpreting your data” by Mike K. Smith, Chris Atherton and Karen Mardahl.

Mike K. Smith encourages us to insist on hard evidence

The workshop was virtually free of math in terms of formulas and calculations. Nonetheless, its introduction of concepts such as the different averages (mean vs. median vs. mode) or standard deviation vs. standard error challenged tech communicators. Personally, I’m more familiar with the finer points of language than with mathematical concepts, so it was a bit of a stretch for me.
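For readers who, like me, are more at home with words than with numbers, here is a small illustration I put together afterwards – it is not material from the workshop, and the ratings are made up, with one outlier to show why mean and median can disagree:

```python
# Made-up example (not from the workshop) of the averages and spread
# measures mentioned above.
from math import sqrt
from statistics import mean, median, mode, stdev

ratings = [1, 2, 2, 3, 3, 3, 4, 9]  # fictional survey scores with one outlier

print(mean(ratings))    # arithmetic average: 3.375 – pulled up by the outlier 9
print(median(ratings))  # middle value: 3.0 – robust against the outlier
print(mode(ratings))    # most frequent value: 3

sd = stdev(ratings)           # standard deviation: spread of the individual scores
se = sd / sqrt(len(ratings))  # standard error: uncertainty of the mean itself
print(round(sd, 2), round(se, 2))
```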

The focus, however, was on general principles that give well-done statistics the power to infer a greater whole from representative data:

  • Strength of evidence, meaning the amount of data is large enough
  • Quality of data, meaning the data is good and useful to answer the question

A simple example illustrated these points:

1. Survey a group of people on whether they like Revels – a British candy that comes with different fillings and hence different flavours – in general.

2. Hand out one Revel each to a smaller group of people and ask how many of them liked the specific Revel they were given.

Frequently, the results of #2 are interpreted to mean #1. And that’s not even taking into consideration the alternative suggested by the workshop audience:

3. Watch a smaller group eat Revels (best without their knowing that they’re being watched) and draw your own conclusions about how many really like Revels.

Another principle that was presented and discussed was that correlation measured by studies and statistics is not the same as causation: The fact that two things frequently or always occur together doesn’t mean that one causes the other. They could both be caused by a third, overarching force. Or maybe there’s no causal relation between them at all…
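A toy simulation can make this point concrete. The numbers and the ice-cream-vs.-sunburn scenario below are my own invention, not an example from the workshop:

```python
# Invented example: two quantities correlate strongly because both depend on
# a hidden third factor (temperature), not because one causes the other.
import random
from statistics import correlation  # requires Python 3.10+

random.seed(42)
temperature = [random.uniform(10, 35) for _ in range(200)]        # the common cause
ice_cream = [2.0 * t + random.gauss(0, 5) for t in temperature]   # driven by temperature
sunburns = [0.5 * t + random.gauss(0, 2) for t in temperature]    # also driven by temperature

# Strong correlation, yet eating ice cream does not cause sunburn (or vice versa).
print(round(correlation(ice_cream, sunburns), 2))
```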

The workshop, which illustrated these concepts with dozens of examples, also showed up a few cultural differences: Statisticians seem to strive for accuracy and precision to the point of no longer being quite intelligible, at least not outside their peer group.

I think some of the finer points about the definitions of averages and standard measurements (see above) were lost on some of us tech comm’ers. Still, the general message resonated with many: Statistics deserve close scrutiny, for the numbers they present, for the conditions in which they were measured and for the questions they seek to answer.

As Mike Smith put it towards the end:

What do we want?
Evidence-based change!
When do we want it?
After peer review!