PEERRS Usability Evaluations

[Image: re-designed PEERRS home page, from the final report]

This is the re-designed home page of the PEERRS site. The new design incorporated our recommendations to move the START button "above the fold" and to make the registration process clearer for first-time users.

Skills Used

  • Personas & Scenarios
  • Comparative Evaluation
  • Surveys
  • Heuristic Evaluation
  • Usability Study
  • Vocabulary Analysis


The seven projects detailed below were completed for the class "Usability of Systems and Services," led by Judy Olson. The overall goal of the class was to learn about and perform many different methods of system evaluation.

All projects were carried out with the same group and the same client, the Program for Education and Evaluation in Responsible Research and Scholarship (PEERRS). PEERRS is a University of Michigan program that requires all primary researchers to complete learning modules and pass a short quiz via the PEERRS website.

All reports were given to PEERRS, and all but one of our final recommendations were implemented on the site. (The remaining recommendation would have required significant investments of time and expertise.)

What we did


GTN

The GTN was our first PEERRS project, in which we visually laid out the content and functionality of the entire website. This helped us get acquainted with the site and identify potential usability issues that we would explore with later evaluation methods.

Personas & Scenarios

For this individual assignment, I interviewed five typical but diverse users of PEERRS. From these interviews, I created three personas and two scenarios. At the end of this assignment, the members of my group pooled our individual personas and chose four to use during the course of the semester.

Comparative Evaluation

For the comparative evaluation, we identified three other universities' research-certification websites and one free online tutorial website. We evaluated aspects of the sites including navigation, testing, and feedback. Compared with PEERRS, one of the academic sites was superior in navigation and in helping users understand where they are within the site.


Surveys

Our goal for the survey was to obtain feedback from members of the University community who had used PEERRS, evaluating the system on three metrics: affect (how users felt about the system), functionality (what users could do with the system), and usability (did the system make it easy for users to accomplish their goals?).

Participants were chosen randomly from the 7,500 registered users of the system; 26 of the 100 invitees completed the survey. In addition to the affect, functionality, and usability questions, we collected demographic data. While some PEERRS users felt more qualified to perform research after participating, others skipped the learning modules entirely and took the test repeatedly until they passed. (There was no penalty for failing any number of times.) These findings suggested that the PEERRS architecture should be re-evaluated.
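The random draw of invitees described above can be sketched in a few lines. This is a hypothetical illustration, not the procedure we actually used; the user list and seed are placeholders, while the counts (100 invitees from 7,500 registered users) come from the report.

```python
import random

# Placeholder roster standing in for the 7,500 registered PEERRS users.
registered_users = [f"user{i}" for i in range(7500)]

random.seed(42)  # fixed seed so the draw is reproducible

# Draw 100 distinct invitees (sampling without replacement).
invitees = random.sample(registered_users, 100)
```

With `random.sample`, no user can be invited twice, which keeps the invitation pool a simple random sample of the registered population.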

Heuristic Evaluation

For the heuristic evaluation, we used Olson's 28 Heuristics and our personas to determine areas of the site that needed attention. We ranked each finding on a severity scale to be presented to developers. One severe violation of a heuristic we found was that the place for finding help was not labeled "Help." This has since been changed on the site.
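The severity ranking described above can be sketched as a simple sorted list of findings. The 0-4 scale and the second finding below are assumptions for illustration; the report only confirms that findings were ranked by severity and that the unlabeled help section was a severe violation.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    heuristic: str       # which heuristic was violated
    description: str     # what we observed on the site
    severity: int        # hypothetical 0-4 scale (4 = most severe)

findings = [
    Finding("Help and documentation", 'Help section not labeled "Help"', 4),
    Finding("Consistency", "Inconsistent button styles", 2),  # invented example
]

# Present the most severe issues to developers first.
ranked = sorted(findings, key=lambda f: f.severity, reverse=True)
for f in ranked:
    print(f"[{f.severity}] {f.heuristic}: {f.description}")
```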

Usability Study

We completed all aspects of a usability study, including determining the focus of the study, recruiting users, planning the test sequence and hardware, writing the script, conducting the tests, analyzing the results, and recommending changes based on those results.

For the study, we tested five subjects representing the demographic makeup of the PEERRS user base. Using Apple Remote Desktop and an iSight camera, we were able to observe and record the sessions from a distance while one group member conducted the test. Subjects were asked to think aloud throughout the process. The study included a pre-test questionnaire for each subject to gauge their computer experience, as well as a post-test questionnaire to gather further information about their experience.

Vocabulary Analysis

The goal of the vocabulary analysis was to evaluate whether the site's language was natural and consistent, whether the content was readable and appropriately complex, and whether the site's metaphors were apt. We used three methods: a metaphor analysis, an object/action analysis, and a readability analysis. One finding, for example, was that the frequently used term "module" was not a useful metaphor for the learning units, as several users were unsure what it referred to.
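A readability analysis like the one mentioned above is often based on a formula such as Flesch Reading Ease. The report does not say which metric we used, so the sketch below is only an illustration: a minimal Flesch scorer with a rough syllable heuristic (higher scores mean easier text).

```python
import re

def flesch_reading_ease(text):
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))

    def syllables(word):
        # Rough heuristic: count vowel groups, ignoring a trailing silent 'e'.
        word = word.lower()
        count = len(re.findall(r"[aeiouy]+", word))
        if word.endswith("e") and count > 1:
            count -= 1
        return max(1, count)

    n_syllables = sum(syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syllables / n_words)
```

Short, common words score high; long, polysyllabic sentences score low, which is one way to flag content (like dense certification text) that may be harder for users to read.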

Group Members

  • K. Frassrand
  • P. Glowacki
  • T. McCarley