Spring 2014 Meeting:
Sample Index Peer Review
By Sue Klefstad
When Team Heartland gathered for our Spring 2014 meeting, there were just six of us. This small size made it possible for us to peer review sample indexes as a group. This way, we all got to hear all of the comments and suggestions.
One general comment made during the process was that no one sends out a sample index unless it is requested. This contradicts the editor statement reported by Sylvia Coates in the ASI book Marketing Your Indexing Services, 3rd ed., edited by Anne Leach, that a résumé accompanied by a sample index carried more weight than a résumé alone.
In the context of index peer review, we reviewed the Wilson Award Evaluation Worksheet handed out by Margie Towery at our 2013 Spring meeting. This information is also on the ASI website. In 2011 the H.W. Wilson Company merged with EBSCO Publishing. The award is now called the EIS Award for Excellence in Indexing. “EIS” stands for EBSCO Information Services—an acronym of an acronym. The “Award for Excellence in Indexing” was part of the name when it was the Wilson Award, too, because that’s the point of the award: excellence in production and presentation of an index. The indexer produces the index and the publisher presents it, so the award is to both.
This worksheet covers the main points of index evaluation:
The Heartland group then read over the Institute for Certified Indexers list of indexing best practices, which follows. For an explanation of the items, see the ICI website.
As a former packager, one general index review comment I would make is to include a header in the .doc file of the index that you send to the client. Put the name of the book or project in the header, as well as page numbers. This is a simple courtesy to your client that polishes your brand.
Associated with index review is usability. A 2000 Key Words article by Christine Nelsen Ryan and Sandra Henselmeier (“Usability Testing at Macmillan USA,” Key Words, Vol. 8 No. 6, pp. 189, 199–202) relates an index usability study conducted by Macmillan USA after their head of indexing attended an ASI national conference session presented by Dick Evans.
The first step involved deciding which books to test. Evans suggested using books going into multiple editions, so that the indexes could be improved by the testing. The second step was to develop a usability test for each book: participants would be given a list of questions and asked to write down the number of the page where each answer could be found. Questions were developed both by reading the book and by looking for confusing index entries, such as cross-references. They avoided using the text’s wording in the questions, instead aiming for real-life wording. For example, a question about Excel’s split screen function asked, “How do you view the beginning and end of a long document at the same time?”
The third step in the process was to determine who the participants would be. They used Macmillan USA indexers to administer the test to the participants, so that the indexers were directly involved in observing index users. For this first usability test, Macmillan USA used volunteer employees. Volunteers were asked to donate 60–90 minutes of their time; they were offered refreshments and a free movie pass as incentive.
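The article does not spell out how answers were scored, but the written-page-number format lends itself to a simple check against an answer key. The helper below is purely an illustrative assumption, not Macmillan USA’s actual procedure:

```python
# Illustrative sketch only: participants wrote down a page number for each
# question; scoring against an answer key (my assumption, not the study's
# documented method) could then be a simple membership check.

def score_answers(given: dict[str, int], key: dict[str, set[int]]) -> int:
    """Count questions whose written-down page number is a correct page."""
    return sum(1 for q, page in given.items() if page in key.get(q, set()))

# Hypothetical answer key and one participant's responses.
answer_key = {"split screen": {112, 113}, "mail merge": {240}}
participant = {"split screen": 112, "mail merge": 245}

print(score_answers(participant, answer_key))  # prints 1
```

Allowing a set of correct pages per question reflects that an answer may legitimately appear on more than one page.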
Macmillan USA developed a set of questionnaires for the usability testing process. The pre–usability test questionnaire was delivered at some point prior to the actual test and told participants what to expect during the test. It also asked about familiarity with the topic of the book being tested and index use habits. There was also a post–usability test questionnaire that asked the participants to critique the process.
Each of the 22 participants was assigned an observer. The participants and their observers first gathered for a discussion of the process and to answer any questions. Then the participant–observer pairs spread out for the usability test, each pair sharing a desk in its own cubicle.
Participants were told to think aloud as they worked to answer questions. This was to aid the observers, but in the post-test questionnaire, participants said that thinking aloud helped them. Observers were surprised at the choice of terms participants tried to look up in the index. Observers also learned that See cross-references did not confuse people but See also cross-references did: for example, if a participant was at the “Web pages” entry and saw a “See also Websites” cross-reference, that participant expected to see a “Web pages” subheading under “Websites.”
One comment often repeated by participants was how much easier it was to work with indexes that had bold main entry headings. Our Heartland group agreed that this is a publisher design issue, for the most part, not an indexer issue.
One lesson Macmillan USA learned from their usability test process was that they needed to involve the content people when building the usability test questions. Without that input on this first test, many of the questions were not understood by the participants. They hoped that with content area input, they would have questions that people might typically ask.
The indexing lessons learned were as follows:
So the lessons learned began and ended with the same directive: more entry points!
The Pacific Northwest Chapter of ASI has been studying index usability since 2002. In a 2009 article (“Experience an Index Usability Test,” Key Words, Vol. 17 No. 4, October–December 2009, pp. 130, 132–133), author Cheryl Landes describes a Pacific Northwest Chapter booth at an ASI annual meeting in which they conducted an index usability test.
The chapter had two members index the same book of essays; one was a scholarly indexer and the other a technical indexer. The first lesson learned was that indexers’ backgrounds significantly affected how they wrote entries and structured the index.
At the ASI conference booth, the chapter invited people to use each of the two indexes to answer five questions about the book. The conference participants tended to prefer using the scholarly index but answered more questions correctly using the technical index.
The chapter also conducted this usability test with a couple of student groups, one group associated with publishing and the other not. These tests showed that the backgrounds of the index users greatly affected their efficiency in using an index.
Another index usability study was published in The Indexer (“Let’s get usable! Usability studies for indexes,” Vol. 22 No. 2, October 2000, pp. 91–95). When author Susan C. Olason became an indexer, many of her friends complained about how confusing indexes are. Finding little published information on usability, the author used her systems engineering and human factors background to conduct her own usability tests.
Her team worked to develop questions that went beyond keyword searches and that allowed use of the entire index without guiding the test participant. The questions were generalized into tasks. Example questions included:
The test participants were asked to point with their index fingers at what they were doing so that their access paths could be followed. Combining this with discussion after the test, the observer was able to trace the path each index user followed.
Their study determined that indented indexes were 60% more efficient than run-in indexes. Another finding was that prepositions and conjunctions at the start of subheadings made indexes less efficient.
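The article reports the 60% figure without giving the underlying formula, so the metric in the sketch below, correct lookups per minute of searching, and all of the numbers are illustrative assumptions showing how such a comparison might be computed:

```python
# Illustrative sketch only: "efficiency" here is assumed to mean correct
# index lookups per minute of searching; the study does not publish its
# formula, and these session numbers are invented for the example.

def efficiency(correct_answers: int, minutes_searching: float) -> float:
    """Correct index lookups per minute of searching."""
    return correct_answers / minutes_searching

# Hypothetical results for the same question set against two layouts.
indented = efficiency(correct_answers=8, minutes_searching=10)  # 0.8/min
run_in = efficiency(correct_answers=5, minutes_searching=10)    # 0.5/min

# Relative improvement of the indented layout over the run-in layout.
improvement = (indented - run_in) / run_in
print(f"indented is {improvement:.0%} more efficient")  # prints 60%
```

Expressing the result as a relative improvement over the run-in baseline is what lets a raw difference of 0.3 lookups per minute read as “60% more efficient.”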
Users who were familiar or somewhat familiar with the book’s subject matter searched for specific terms. Subject-area novices searched first for an index entry based on the title of the book. These “table of contents–type” index entries were helpful to novice users as well as to experienced subject-matter users. Indexes whose TOC-type entries contained cross-references to other index entries had higher efficiency and higher usefulness ratings from users of all experience levels.
Indexing is a profession with little feedback; usability studies are welcome food for thought.
© 2014 by Heartland Chapter of ASI. All rights reserved.