“Statistics without maths” workshop at #TCUK11

Technical Communication UK 2011 is off to a good start with around 100 people attending six pre-conference half-day workshops on Tuesday. Even the night before saw about 20 attendees joining the organisers to help with last-minute setup chores, not to mention drinks and dinner.

On Tuesday afternoon, I attended the workshop “Statistics without maths: acquiring, visualising and interpreting your data” by Mike K. Smith, Chris Atherton and Karen Mardahl.

Mike K. Smith encourages us to insist on hard evidence

The workshop was virtually free of maths in terms of formulas and calculations. Nonetheless, its introduction of concepts such as the different measures of average (mean vs. median vs. mode) or standard deviation vs. standard error challenged tech communicators. Personally, I’m more at home with the finer points of language than with mathematical concepts, so it was a bit of a stretch for me.
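
For fellow non-mathematicians, here is a minimal sketch of those terms in plain Python (my own illustration, not workshop material), using only the standard library:

```python
import math
import statistics

# Nine made-up survey responses (illustrative data, not from the workshop)
data = [2, 3, 3, 4, 5, 5, 5, 8, 13]

print(statistics.mean(data))    # mean: sum divided by count
print(statistics.median(data))  # median: the middle value when sorted
print(statistics.mode(data))    # mode: the most frequent value

sd = statistics.stdev(data)     # standard deviation: spread of individual values
se = sd / math.sqrt(len(data))  # standard error: uncertainty of the mean itself
print(sd, se)
```

The last two lines carry the distinction that tripped us up: the standard deviation describes how scattered the individual answers are, while the standard error describes how much the calculated mean might wobble if you repeated the survey.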

The focus, however, was on general principles that give well-done statistics the power to infer a greater whole from representative data:

  • Strength of evidence, meaning the amount of data is large enough
  • Quality of data, meaning the data actually suits the question being asked

A simple example illustrated these points:

1. Ask a group of people whether, in general, they like Revels, a British candy that comes with different fillings and hence different flavours.

2. Hand out one Revel each to a smaller group of people and ask whether they liked the specific Revel they were given.

Frequently, the results of #2 are interpreted to mean #1. And that’s not even taking into consideration the alternative suggested by the workshop audience:

3. Watch a smaller group eat Revels (ideally without their knowing that they’re being watched) and draw your conclusions about how many really like Revels.

Another principle that was presented and discussed is that correlation measured by studies and statistics is not the same as causation: The fact that two things frequently or always occur together doesn’t mean that one causes the other. They could both be caused by a third, overarching factor. Or maybe there’s no causal relation between them at all…
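
A toy simulation makes the “third factor” case tangible. In this sketch (my own, not from the workshop), hot weather drives both ice-cream sales and sunburn cases, so the two correlate strongly even though neither causes the other:

```python
import random

random.seed(1)

# Temperature is the hidden common cause behind both quantities
temperature = [random.uniform(10, 35) for _ in range(200)]
ice_cream_sales = [3 * t + random.gauss(0, 5) for t in temperature]
sunburn_cases = [2 * t + random.gauss(0, 5) for t in temperature]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Close to 1.0, yet banning ice cream would not prevent a single sunburn
print(pearson(ice_cream_sales, sunburn_cases))
```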

The workshop, which illustrated these concepts with dozens of examples, also brought to light a few cultural differences: Statisticians seem to strive for accuracy and precision to the point of being not quite intelligible anymore, at least not outside their peer group.

I think some of the finer points about the definitions of averages and standard measurements (see above) were lost on some of us tech comm’ers. Still, the general message resonated with many: Statistics deserve close scrutiny, for the numbers they present, for the conditions under which they were measured and for the questions they seek to answer.

As Mike Smith put it towards the end:

What do we want?
Evidence-based change!
When do we want it?
After peer review!
