Domain: Reference Transactions 2017

MEASURES AND METRICS

DOMAIN: REFERENCE TRANSACTIONS

A Reference Transaction is often the most common interaction between Repository staff and Users, whereby staff engage with Users to learn about their research interests and how best to employ the resources of the Repository to respond to their queries. As a personally mediated experience, Reference Transactions provide opportunities for staff to hear and gather stories from Users about the impact that archives and special collections have on people’s lives.

Basic measure (“Reference Questions”)

Count the number of unique Reference Questions received from Users regardless of the method used to submit the question.

Rationale:

Maintaining a count of Reference Questions from Users is the most basic way to track staff engagement with Users.

Guidelines for collection:

  • Count Reference Questions concerning different topics from the same individual as separate questions.
  • Exclude follow-up emails, multiple social media interactions, or other conversations on the same question.
  • Exclude directional/general information questions, tours, or instruction sessions.
  • Exclude requests from a User to use the next folder, box, item, etc., in a collection while working in the Reading Room.
  • Count questions from Users working in the Reading Room if the response requires staff to employ their own knowledge or consult one or more information sources and the User has not already asked a question on the same topic.
  • Some Repositories may find it more practical to collect statistics for only a limited period of time rather than continuously (i.e., sampling). Academic libraries, for example, are sometimes asked to collect and report the average number of Reference Questions received in a typical week, with "typical week" defined according to established criteria.

Application and examples:

  • If a User contacts the Repository via email with a Reference Question and then follows up with a clarifying or related question within a reasonable period of time, count this as a single Reference Question. If a User and staff exchange multiple emails related to the same research topic, the Repository should count this as a single Reference Question, but may wish to assign it a higher complexity of transaction (see the Question Complexity advanced measure in the Reference Transactions domain).
  • A User calls the reference desk at a local historical society to ask a genealogical question about a particular family. The User then sends a follow-up email to the staff member with whom she spoke. The following week, she visits the society’s reading room to consult the family’s papers and asks a few questions related to her original inquiry. The Repository should count this as a single Reference Question since the topic has not changed.
  • During a lunchtime conversation, a university archivist is asked when the first glee club performance was held. That afternoon, the archivist researches the question and calls the colleague back with an answer. The archivist should count this as a Reference Question since answering it required the archivist's knowledge, time, and use of information sources.

Advanced measure (“Question Method”)

Categorize the methods by which Reference Questions are received.

Rationale:

Capturing and categorizing the methods by which Reference Questions are received can be a useful means of understanding how Users prefer to interact with staff and tailoring services accordingly.

Guidelines for collection:

  • Identify and categorize the methods whereby Users submit Reference Questions. Typical methods include submitting questions through an online form, sending email to a general address and/or specific staff members, using regular mail, calling a general telephone number and/or specific staff, using social media services such as Facebook and Twitter, and approaching staff in person.
  • Record the method by which Reference Questions are received when tallied and counted for purposes of obtaining the basic measure indicated above.
  • Some Repositories may find that periodic sampling for 2- to 4-week intervals rather than continuous recording yields sufficient data for assessment purposes.

Application and examples:

  • A historical society tracks the methods whereby it receives Reference Questions during March, July, and November of every year in order to understand whether Users prefer using different methods of contacting and interacting with staff at different times of year, and whether those methods are changing over time.
  • A corporate archives has instituted a reference request form and ticketing system to help manage and prioritize requests from company employees. Archives staff continue to monitor the other methods through which they receive Reference Questions in order to assess how successful they are in directing employees to use the reference form.

Advanced measure (“Time Spent Responding”)

Record the amount of time Repository staff spend managing and responding to Reference Questions.

Rationale:

Tracking the amount of time that staff spend engaging in Reference Transactions is one aspect of information gathering that can help Repository managers gauge the proportion and value of the activity in relation to other functions and create training programs and tools to help staff respond both more efficiently and more effectively to User requests.

Guidelines for collection:

  • Record the total amount of time spent managing and responding to each Reference Question. Include time spent on in-person consultations, phone calls, emails, social media replies, etc., as well as time spent conducting research required to answer the question and time spent recording and managing the Reference Transaction in a tracking system. Include time spent by all staff involved in managing and responding to the question.
  • Because managing and responding to a Reference Question can often involve the efforts of multiple staff members, recording time spent is most effectively accomplished using a centralized electronic system for managing Reference Transactions. Some such systems include a field in which time spent can be recorded and tabulated cumulatively. Other systems, such as locally developed databases and spreadsheets, may be modified to allow addition of a field or column for recording time spent. Repositories without a strong technological infrastructure may find it feasible to use manual forms for recording and tabulating time spent on Reference Transactions.
  • Establish a local policy to record either the actual time spent managing and responding to the Reference Question or an estimated amount of time according to fixed intervals (e.g., 15- or 20-minute time blocks), rounded to the nearest interval. If actual times are recorded, it may still be necessary for staff to estimate the amount of time since it is common to be interrupted while researching and responding to reference questions.
  • Some Repositories may find that periodic sampling for 2- to 4-week intervals rather than continuous recording yields sufficient data for assessment purposes.

Application and examples:

  • A staff member receives a phone call from a User and spends 5 minutes discussing a Reference Question and taking notes. The staff member sends the notes via email to another staff member to ask assistance in looking up the desired information. The second staff member spends 15 minutes consulting various reference sources and then 10 minutes writing an email to the User. The User responds to thank the staff member, and the staff member spends 5 minutes updating the Repository’s Reference Transaction tracking system, including the amount of time spent on the transaction and closing it in the system. The total amount of time recorded in the system for this transaction would be 35 minutes if the method of recording actual time spent is used, or 30 minutes if fixed intervals of 15 minutes are used and rounded to the nearest interval.
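The arithmetic in this example can be sketched in a few lines; the function names and the 15-minute interval are illustrative, not prescribed by the measure:

```python
def total_actual_minutes(segments):
    """Sum the actual minutes logged by all staff for one transaction."""
    return sum(segments)

def rounded_to_interval(total_minutes, interval=15):
    """Round a total to the nearest fixed interval, per local policy."""
    return round(total_minutes / interval) * interval

# Phone call, research, reply email, and tracking-system update
segments = [5, 15, 10, 5]
actual = total_actual_minutes(segments)      # 35 minutes (actual time)
estimated = rounded_to_interval(actual)      # 30 minutes (15-minute blocks)
```

Either figure is acceptable under the guidelines above, provided the local policy (actual time versus fixed intervals) is applied consistently.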

Advanced measure (“Question Purpose”)

Record the purpose of Reference Questions and other service requests according to a defined rubric.

Rationale:

Recording and categorizing the purposes of Reference Questions and service requests can help a Repository to understand better which collections and services its Users most value and plan operational, communication, training, and other strategies accordingly.

Guidelines for collection:

  • To ensure consistency of data collection and facilitate tabulation of results, devise, adopt, or adapt a rubric for categorizing Reference Questions according to the subject or collection area, type of service requested, or a combination thereof. For example, a subject-based rubric might include subjects such as American history or literature, while a collection-based rubric might include major areas covered by collection groups (e.g., political archives or detective fiction) or even specific collections that are frequently used (e.g., a local authors collection). Another type of rubric might distinguish requests for information, reproductions, permissions, and other services, or might distinguish intent and/or anticipated outcomes such as fulfillment of a class assignment, academic or commercial publication, genealogical research, general interest, etc.
  • Because Reference Questions and service requests may be communicated through various means and various staff, recording the purpose of requests is most effectively accomplished using a centralized electronic system for managing Reference Transactions. Some such systems may include a field or fields for recording the purpose or purposes of transaction and facilitate their tabulation. Other systems, such as locally developed databases and spreadsheets, may be modified to allow addition of a field or column for recording transaction purposes. Repositories without a strong technological infrastructure may find it feasible to devise manual forms for recording and tabulating the purposes of Reference Questions and other service requests.
  • Online or manual forms for submitting Reference Questions or User registrations may be adapted to include a field or fields that the User can use to explain and categorize the purpose of their request or use of the Repository. If this method is used, Repositories should consider incorporating a rubric and appropriate instructions on the form so that Users will describe the purpose(s) of their requests consistently and accurately to facilitate tabulation and analysis by Repository staff. Checkboxes or radio buttons are better suited to this function than free-text fields, but if the latter are used, staff may then apply a rubric to categorize the purposes described by Users.
  • Rubrics may be developed for specific Repository types, such as government archives, business archives, academic special collections and archives, historical societies, etc.

Application and examples:

  • A Repository may wish to record whether the User is seeking information about its services, its collections, or the anticipated product or outcome of the consultation. If more than one categorization scheme or rubric is employed, care should be taken to avoid conflating statistics from different categories. For example, a tally of the number of genealogical queries should not be added to the number of requests for Reproductions.
  • If a Repository wants to assess what types of information and services its Users seek most often, it may devise a rubric that allows it to categorize which requests are oriented to the purpose of obtaining specific information derived from its collections, obtaining general information about its services, or requesting services such as digital reproductions, Interlibrary Loans, etc.
  • If a Repository wants to assess the intentions that prompt Users to contact or visit the Repository, the Repository may devise a form that asks Users to select an option that best describes the primary reason for their contact or visit. Such options might include general interest, genealogical research, class assignment, academic research and publication, etc.
  • If a Repository wants to assess the anticipated outcomes of User consultations, it may devise a form that asks Users whether the anticipated product or result includes academic or commercial publication, an exhibition, a creative adaptation, such as a fictional work or film, or a business or legal enterprise.

Advanced measure (“Question Complexity”)

Record the level of complexity of Reference Questions according to a defined rubric or scale.

Rationale:

Recording and categorizing the complexity of Reference Questions can help a Repository to understand better what levels of expertise and engagement are required to meet the desires of its Users.

Guidelines for collection:

  • To ensure consistency of data collection and facilitate tabulation of results, devise, adopt, or adapt a rubric for categorizing Reference Questions according to progressive levels of question complexity and resources required to respond. An example of a predefined rubric is the READ (Reference Effort Assessment Data) scale, “a six-point scale tool for recording vital supplemental qualitative statistics gathered when reference librarians assist users with their inquiries or research-related activities by placing an emphasis on recording the effort, skills, knowledge, teaching moment, techniques and tools utilized by the librarian during a reference transaction.” (See: http://readscale.org/) Examples of predefined categories include: ready reference, research assistance, and research consultation meeting.
  • Rubrics and scales may be useful to identify question complexities and can be developed for individual Repository needs or specific Repository types, such as government archives, business archives, academic special collections and archives, historical societies, etc.

Application and examples:

  • Archivists and librarians at an academic library that employs the READ scale for assessing its general reference operations have adapted the scale to provide comparable ratings for archival reference: (1) directional (location, hours); (2) technical support (instruction in use of catalog, finding aids); (3) basic reference (answering specific informational question in less than 15 minutes); and (4-6) three levels of advanced reference categorized according to the level of professional archival knowledge, number of resources and staff engaged, number of interactions with User, and overall time spent.

Recommended metrics

Total number of Reference Questions received per week/month/year

  • Tabulating the total number of Reference Questions received over given periods of time and comparing totals across periods can reveal patterns in User demands. For instance, Reference Questions might increase before an academic institution’s annual alumni weekend or towards the end of academic terms when class assignments are due.
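Assuming each unique Reference Question is logged with the date it was received, period totals can be tabulated with a short script (the sample data below is hypothetical):

```python
from collections import Counter
from datetime import date

# Hypothetical log: one entry per unique Reference Question received
received = [
    date(2017, 4, 3), date(2017, 4, 28), date(2017, 5, 2),
    date(2017, 5, 9), date(2017, 5, 30), date(2017, 12, 11),
]

# Tally questions per (year, month) so totals can be compared across periods
per_month = Counter((d.year, d.month) for d in received)
busiest_month = per_month.most_common(1)[0]  # ((2017, 5), 3)
```

The same grouping key can be changed to a week number or year to produce the other recommended intervals.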

Total number of Reference Questions received per day/week/month/year via each method

  • Tabulating and comparing the number of Reference Questions received via each method tracked by the Repository can reveal patterns and trends in the means by which Users engage Repository staff and collections over time, which may lead the Repository to reevaluate its staffing and Reference Transaction management systems.
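One way to tabulate each method's share of incoming questions, assuming a method label is recorded for every question (the labels and data here are hypothetical):

```python
from collections import Counter

# Hypothetical method recorded for each Reference Question received
methods = ["email", "web form", "email", "phone", "in person",
           "social media", "email", "web form"]

tallies = Counter(methods)
total = sum(tallies.values())

# Proportion of questions arriving via each method
shares = {method: count / total for method, count in tallies.items()}
# e.g., email accounts for 3 of 8 questions (0.375)
```

Comparing these shares across sampling periods shows whether Users are shifting between contact methods over time.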

Average number of minutes spent responding to Reference Questions

  • Calculate the total number of minutes spent managing and responding to Reference Questions, and divide the total by the number of Reference Questions received during the same period.
  • A Repository may use this metric to forecast staffing needs as the number of Reference Questions increases or decreases over time. Another Repository may use this metric to assess the impact of the implementation of a reference training program or a new Reference Transaction management system on efficiency in managing and responding to Reference Questions.
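The calculation described above is a simple division, sketched here with a guard for a period in which no questions were received (the figures are illustrative):

```python
def average_minutes_per_question(total_minutes, question_count):
    """Average staff time per Reference Question for a reporting period."""
    if question_count == 0:
        return 0.0
    return total_minutes / question_count

# e.g., 1,240 staff minutes spent on 40 questions in one month
avg = average_minutes_per_question(1240, 40)  # 31.0 minutes per question
```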

Average number of minutes spent responding to internal vs. external Users

  • A Repository may wish to assess the relative amount of time it devotes to serving internal/associated Users versus external/unassociated Users as defined by the basic measure for User Demographics. To do so, the Repository would need to calculate separately the total number of minutes staff spent managing and responding to Reference Questions from internal and external Users and divide the respective totals by the number of questions received from each User category.
  • Comparing the average length of time spent responding to internal versus external Users may point to a need to review the Repository’s mission or User service policies.
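A sketch of the separate calculation for each User category, assuming each transaction record carries its total minutes and the User's internal/external status (the data is hypothetical):

```python
# Hypothetical (minutes, category) records for one reporting period
records = [(20, "internal"), (45, "external"), (10, "internal"),
           (60, "external"), (30, "internal")]

def category_average(records, category):
    """Average minutes per Reference Question for one User category."""
    times = [minutes for minutes, cat in records if cat == category]
    return sum(times) / len(times) if times else 0.0

internal_avg = category_average(records, "internal")  # 20.0
external_avg = category_average(records, "external")  # 52.5
```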

Ratio of time spent responding to Reference Questions to time Users spend in the Reading Room

  • Repositories may find it interesting to monitor the ratio of time that staff spend managing and responding to Reference Questions to the amount of time that Users spend consulting collection materials in the Reading Room (see Reader Hours advanced measure under the Reading Room Visits domain).

Ratio of Reference Questions submitted by each User demographic category

  • Repositories that track the advanced measure for User affiliation in the User Demographics domain may find it useful to compare the proportion of Reference Questions received from Users in each category over different intervals of time.

 


 


cdupont says:
Directing patrons to other archives

- I wonder if an example to clarify the boundary between reference questions and directional questions would help. This is based on a relatively recent change to our own practice. Our archives used to count referrals to other archives as directional questions, since we were "directing" patrons to that archives. (This comes up a lot since there is another archives in our building.) When we did some work to align our practices with library reporting (using ARL definitions), we changed that to reference questions. That is, if the question is "where is XXX Archives", that would be a directional question. If the question is open ended, requiring that archives staff use their knowledge of another archives' holdings or general information about archives in the area (or search a consortial database), we now interpret this as a reference question. This might be obvious, but I mention it in case others might fall into the same trap.