Technical Communication UK 2011 is off to a good start with around 100 people attending six pre-conference half-day workshops on Tuesday. Even the night before saw about 20 attendees joining the organisers to help with last-minute setup chores, not to mention drinks and dinner.
On Tuesday morning, I attended Alice Jane Emanuel’s workshop “The Tech Author Slide Rule: Measuring and improving documentation quality”. In a lively and engaging session, “AJ” taught us how to use the slide rule she came up with. It is actually an Excel spreadsheet that helps you measure qualities such as structure, navigation, language, and task orientation. You weight around 30 such qualities, depending on how important each is to you. Then you can grade a document (or, after optional tweaking, a collection of topics) by assigning points for each quality. The sheet sums up the weighted points per category and also gives a total score.
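The weighted-scoring idea behind the sheet can be sketched in a few lines of code. Note that the category names, criteria, weights, and score ranges below are illustrative assumptions for the sketch, not the actual contents of AJ’s spreadsheet:

```python
# Sketch of a weighted documentation-quality score, modelled loosely on
# the slide rule's approach. Categories, criteria, weights, and scores
# here are made up for illustration.

# Each criterion: (name, weight 1-5, score 0-5 assigned by the grader)
CATEGORIES = {
    "structure": [("logical order", 3, 4), ("chunking", 2, 3)],
    "language":  [("plain wording", 3, 5), ("terminology", 2, 2)],
}

def grade(categories):
    """Return weighted points per category and the total score."""
    per_category = {
        cat: sum(weight * score for _, weight, score in criteria)
        for cat, criteria in categories.items()
    }
    return per_category, sum(per_category.values())

per_category, total = grade(CATEGORIES)
print(per_category)       # weighted points per category
print("total:", total)    # overall document score
```

Grading the same criteria against a legacy and a current version of a document then gives you two comparable totals, which is exactly the before/after comparison described below.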
While the sheet is excellent for tracking progress over time, you can also see results very quickly by comparing your current documentation with legacy deliverables. The quantified approach offers a range of benefits that are otherwise hard to come by for tech writers:
- The numbered scores appeal to managers and make it easier for writers to show progress and accountability.
- The standardized categories can help you to build a team by ensuring that everyone focuses on the same qualities and by pointing to problems where individual documents go off the rails. They also help to train new writers.
- In general, it helps to raise the profile of technical communication by clarifying its contribution and giving everyone in the organization more specific terms and numbers to discuss.
AJ emphasized that you need to keep the tool’s categories and usage consistent: It’s fine to change or add categories, weights and ranges of available weights and scores, but remember that you jeopardize comparability of results when you do. It may be fine to add a handicap for special cases, but in general, beware of grade inflation and keep your grading consistent.
I think the tool is a great addition to any peer review/editing process where fellow tech writers assess style guide compliance. Given its granularity of dozens of weighted criteria, I expect it would be most valuable for improving writing that’s problematic in specific categories. When different writers assess the quality of different deliverables over time, I’m not sure the grading is consistent enough, or the single total score indicative enough, to track progress in a meaningful, quantifiable way. However, I believe it could still show relative improvement.
I think it’s very much worth checking out AJ Emanuel’s slide rule, and it’s easy to test drive it:
- Download the tool from AJ’s website Comma Theory where you can also find additional information.
- If you want to, tweak the categories (for example, by comparing it with Gretchen Hargis’s qualities in her book Developing Quality Technical Information: A Handbook for Writers and Editors.)
- Quickly grade a (short) document in both a legacy version that has since seen significant improvement and the current version.
- Evaluate the scores and test them on colleagues or managers.