UPA 2004 Presentation—Page 1
A Comparison of Questionnaires for Assessing Website Usability
Thomas S. Tullis and Jacqueline N. Stetson
Human Interface Design Department, Fidelity Center for Applied Technology
Fidelity Investments
82 Devonshire St., V4A
Boston, MA 02109
Contact: tom.tullis@fidelity.com
ABSTRACT:
Five questionnaires for assessing the usability of a website were compared in a study with
123 participants. The questionnaires studied were SUS, QUIS, CSUQ, a variant of
Microsoft’s Product Reaction Cards, and one that we have used in our Usability Lab for
several years. Each participant performed two tasks on each of two websites:
finance.yahoo.com and kiplinger.com. All five questionnaires revealed that one site was
significantly preferred over the other. The data were analyzed to determine what the
results would have been at different sample sizes from 6 to 14. At a sample size of 6, only
30-40% of the samples would have identified that one of the sites was significantly
preferred. Most of the data reach an apparent asymptote at a sample size of 12, where two
of the questionnaires (SUS and CSUQ) yielded the same conclusion as the full dataset at
least 90% of the time.
Introduction
A variety of questionnaires have been used and reported in the literature for assessing the
perceived usability of interactive systems, including QUIS [3], SUS [2], CSUQ [4], and
Microsoft’s Product Reaction Cards [1]. (See [5] for an overview.) In our Usability Lab, we
have used our own questionnaire for the past several years to assess the subjective
reactions that participants in a usability test had to a website. However, we had concerns
about the reliability of our questionnaire (and others) given the relatively small number of
participants in most typical usability tests. Consequently, we decided to conduct a study to
determine the effectiveness of some of the standard questionnaires, plus our own, at
various sample sizes. Our focus was specifically on websites.
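The sample-size analysis described above can be sketched as a Monte Carlo subsampling procedure: repeatedly draw random subsamples of a given size from the full dataset, test each for a significant preference, and record how often the subsample reaches the same conclusion as the full data. Everything below is illustrative, not the authors' actual analysis: the `diffs` data are simulated rather than the study's real scores, and `fraction_significant` is a hypothetical helper that runs a paired t-test using a small table of standard two-tailed critical values.

```python
import random
import statistics

# Two-tailed critical t values at alpha = 0.05, indexed by degrees of freedom.
T_CRIT = {5: 2.571, 6: 2.447, 7: 2.365, 8: 2.306, 9: 2.262,
          10: 2.228, 11: 2.201, 12: 2.179, 13: 2.160}

def fraction_significant(diffs, n, trials=1000, seed=0):
    """Fraction of random size-n subsamples whose paired t-test on the
    per-participant score differences reaches significance (p < .05)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        sample = rng.sample(diffs, n)
        mean = statistics.mean(sample)
        sd = statistics.stdev(sample)
        if sd == 0:
            continue
        t = mean / (sd / n ** 0.5)          # paired t statistic on differences
        if abs(t) > T_CRIT[n - 1]:
            hits += 1
    return hits / trials

# Simulated per-participant differences (site A minus site B scores);
# the real study instead used questionnaire data from 123 participants.
rng = random.Random(1)
diffs = [rng.gauss(10, 15) for _ in range(123)]

for n in (6, 8, 10, 12, 14):
    print(n, round(fraction_significant(diffs, n), 2))
```

As expected, the fraction of subsamples detecting the preference grows with the subsample size, which mirrors the paper's finding that small samples often miss a difference the full dataset shows clearly.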
Method
We decided to limit ourselves to our own questionnaire plus those in the published literature
that we believed could be adapted to evaluating websites. The questionnaires we used were
as follows (illustrated in Appendix A):
1. SUS (System Usability Scale)—This questionnaire, developed at Digital Equipment
Corp., consists of ten questions. It was adapted by replacing the word “system” in
every question with “website”. Each question is a statement rated on a five-point scale
from “Strongly Disagree” to “Strongly Agree”.
2. QUIS (Questionnaire for User Interface Satisfaction)—The original questionnaire,
developed at the University of Maryland, was composed of 27 questions. We
dropped three that did not seem to be appropriate to websites (e.g., “Remembering
names and use of commands”). The term “system” was replaced by “website”, and
the term “screen” was generally replaced by “web page”. Each question is a rating
on a ten-point scale with appropriate anchors at each end (e.g., “Overall Reaction to
the Website: Terrible … Wonderful”).
3. CSUQ (Computer System Usability Questionnaire)—This questionnaire, developed at
IBM, is composed of 19 questions. The term “system” or “computer system” was