Technical Translation: Usability Strategies for Translating Technical Documentation


Usability Evaluation Case Studies 
There are numerous published accounts of usability studies conducted in a 
variety of contexts. These studies are of varying relevance to user guides but 
many of them provide valuable insights into the practicalities of conducting 
usability evaluations. 
One such study was carried out by Simpson (1990) and despite the fact that it is a developmental evaluation model and it groups print and online documentation together, it does discuss two examples of usability testing which give some useful practical guidance for conducting evaluations. One of the studies involved testing online help while the other involved a computer-based tutorial (CBT). The data gathered included measures such as the following (see the sketch after this list):

the number of times a manual is consulted
the number of pages looked at on each visit to the manual
the number of searches in the table of contents on each visit to the manual
the number of searches in the index on each visit to the manual
the number of repeated visits to the manual
the number of times the reference card is consulted
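Purely as an illustration of how such measures might be captured, the following Python sketch logs these events during a test session. The class and method names are invented for this example and do not come from Simpson's study.

```python
from collections import Counter

class ManualUsageLog:
    """Hypothetical logger for the manual-usage measures listed above."""

    def __init__(self):
        self.counts = Counter()     # running tallies of each event type
        self.pages_per_visit = []   # set of pages viewed on each visit

    def open_manual(self):
        """Record a (possibly repeated) visit to the manual."""
        self.counts["manual_visits"] += 1
        self.pages_per_visit.append(set())

    def view_page(self, page):
        self.counts["page_views"] += 1
        self.pages_per_visit[-1].add(page)

    def search_toc(self):
        self.counts["toc_searches"] += 1

    def search_index(self):
        self.counts["index_searches"] += 1

    def consult_reference_card(self):
        self.counts["reference_card_consultations"] += 1

# One simulated session
log = ManualUsageLog()
log.open_manual()
log.search_toc()
log.view_page(12)
log.view_page(14)
log.consult_reference_card()
print(log.counts)
print([len(pages) for pages in log.pages_per_visit])  # pages looked at per visit
```

Tallies of this kind would yield exactly the counts listed above, per subject and per session.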
A crucial question investigators must ask themselves, Simpson asserts 
(1990:42), is what specific data is sought. Simpson maintains that the decid-
ing factor in choosing an evaluation method is the type of usability infor-
mation needed. He proposes the following stages for any form of testing 
(1990:45): 
define the test questions 
decide what data is needed in order to answer these questions 
select methods for getting this data 
plan how the methods should be implemented 
By his own admission, this process is rarely as straightforward as it seems. 
Beyond this overview, however, Simpson provides little useful practical ad-
vice. 
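Although Simpson offers little detail beyond this outline, the four stages can be read as a simple planning template. The sketch below records each stage explicitly, with invented content for a hypothetical documentation test; the field names are our own, not Simpson's.

```python
from dataclasses import dataclass

@dataclass
class UsabilityTestPlan:
    """Simpson's (1990:45) four planning stages; field names are invented."""
    test_questions: list   # 1. define the test questions
    data_needed: list      # 2. decide what data will answer them
    methods: list          # 3. select methods for getting this data
    implementation: str    # 4. plan how the methods are implemented

plan = UsabilityTestPlan(
    test_questions=["Does version B of the guide reduce task errors?"],
    data_needed=["error counts per task", "task completion times"],
    methods=["observed task-based test", "post-task questionnaire"],
    implementation="Two matched groups, one guide version each, identical tasks.",
)
print(plan)
```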
Another study, carried out by Harrison and Mancey (1998), compares 
two versions of an online, web-based manual and examines the optimum 
elapsed time before gathering users’ reactions to the different designs. 
Rather than examining textual or content-related factors, this study com-
pared different navigation models and their effect on usability. Although 
this study also treats online and print documentation identically and its objectives differ from those of the present study, it provides a useful insight into procedures for gathering data using user surveys. 
As a way of testing how well users learn and remember information from a manual, the study used a series of questions based on the information contained in the manual. There were eight groups of twelve questions, which took the form of cloze tests that could be answered with a single response of one, two or three words. Such a method could be used to test the notion that usable texts promote the retention of information over time. 
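As a rough sketch of how responses to such cloze questions might be scored, the snippet below normalises each answer before comparing it with a set of accepted one-to-three-word answers. The questions and answer key are invented for illustration and are not taken from Harrison and Mancey.

```python
def normalise(text: str) -> str:
    """Lower-case, strip punctuation and collapse whitespace before comparing."""
    cleaned = "".join(c for c in text.lower() if c.isalnum() or c.isspace())
    return " ".join(cleaned.split())

def score_cloze(responses, answer_key):
    """Count responses matching any accepted one-to-three-word answer."""
    correct = 0
    for given, accepted in zip(responses, answer_key):
        if normalise(given) in {normalise(a) for a in accepted}:
            correct += 1
    return correct

# Invented answer key for two of the twelve questions in one group
answer_key = [["table of contents", "contents"], ["index"]]
responses = ["Table of Contents", "the index"]
print(score_cloze(responses, answer_key), "of", len(answer_key))  # prints: 1 of 2
```

Matching of this kind is deliberately strict ("the index" fails against "index"), so a real scoring scheme would also need rules for articles and near-synonyms.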
Interestingly, this study also utilised a written script for researchers to fol-
low during tests to ensure consistency for all subjects. The authors do not, 
however, give any details of the actual tasks involved or the efficiency and 
error criteria employed (if any). This can be attributed to the fact that the study was not actually concerned with measuring usability per se. 
The main finding of the study was that the length of time a user spends 
working with a product before being asked to give an evaluation affects the 
final evaluation. However, the authors found that evaluations stabilised after 
working with the product for 15 minutes. This also suggests that think-aloud protocols, argued to be more accurate because of the immediacy of responses, are unnecessary for gauging user satisfaction and opinions, as there is no pressing need for immediate feedback. 
Teague et al. (2001) conducted a series of tests at Intel Corp. in Oregon 
with the similar aim of establishing whether there were significant differ-
ences when users are asked to rate ease of use and satisfaction during and af-
ter tests. A total of 28 subjects were recruited to perform a variety of tasks 
using a range of commercial websites. Tested individually, subjects in the 
two groups were asked questions at either 30 or 120 second intervals while 
performing the tasks. The questions were based on seven-point Likert scales 
and subjects had to answer each question orally during the task. After the 
task, the subjects were asked to answer the questions again in writing. A 
third group, who did not answer questions during the task, only answered 
the questions in writing.
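To make this design concrete, the sketch below compares mean seven-point Likert ratings collected orally during a task with ratings collected in writing afterwards. The numbers are invented and merely mirror the kind of comparison Teague et al. made; they are not the study's data.

```python
from statistics import mean

# Invented seven-point Likert ratings for a single subject
concurrent_30s = [4, 4, 3, 4, 3, 4]  # oral ratings given every 30 seconds during the task
post_task = [5, 5, 4]                # written ratings given after the task

print(f"mean concurrent rating: {mean(concurrent_30s):.2f}")
print(f"mean post-task rating:  {mean(post_task):.2f}")
print(f"difference:             {mean(post_task) - mean(concurrent_30s):+.2f}")
```

On these invented figures the post-task mean is a full point higher, which is the pattern the study reported as "inflation".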
The results of this study appeared to indicate that post-task responses 
were “inflated” and that users gave more honest and representative answers 
during the task. Not only is this finding in conflict with Harrison & 
Mancey (1998), but it can be argued that there were other psychological 
factors at work which can account for this phenomenon. According to 
various social psychologists, most notably Asch (1956) and Sherif (1937), 
conformity and the desire to conform and be accepted can frequently cause 
people to give “false” or less than truthful answers, that is, answers which do not reflect what the person actually thinks. This desire to conform is most 
pronounced when subjects are asked to publicly verbalise their responses. In 
comparison, the need to conform is less obvious where subjects are asked to 
write down their responses in private (Deutsch & Gerard 1955). Thus, it is 
reasonable to assume that the “inflated” results in the post-task survey are 
actually more indicative of the subjects’ real ratings than the verbal, concur-
rent ratings. In any case, it can also be argued that subjects’ responses only 
stabilised after completing the tasks (as mentioned previously by Harrison & 
Mancey 1998). It is possible that, for whatever reason, the subjects were 
(unwittingly) biased into giving negative answers because they thought that 
that was what was expected of them. 
Another possible explanation can be deduced from the finding that sub-
jects who only answered questions in the post-task evaluation performed 
their tasks more quickly than the concurrent groups. The concurrent 
groups took on average 15% longer to perform the tasks and found the tasks 
significantly less enjoyable. We can attribute this to the regular interruption 
and distraction caused by the questions and the subsequent need to refocus 
on the task at hand. Such activities require additional cognitive effort and as 
such increase the workload, fatigue and stress for subjects. Post-task evaluation therefore appears to be a more considerate and indeed more accurate means of data collection than any concurrent form of questioning. 
In contrast to the preceding studies, Zirinsky (1987) provides a detailed 
and useful discussion of usability evaluation aimed specifically at printed 
documentation. Zirinsky starts off by stating that in a usability test involving 
users, we want users to tell us what they dislike about a product, not what 
they like (1987:62). The role of testers, he continues, is to find problems, not to impress researchers with expert performance. A similar point is made by Redish and Dumas (1993:276), who emphasise that users should realise that they are not being tested. 
Zirinsky provides a number of recommendations for those preparing to 
conduct a usability test. The first of these (1987:62) is that all of the test materials should be edited. As part of the editing process, it is essential that there are no typographical, grammatical or punctuation errors, or style inconsistencies, which can distract subjects or even cause them to doubt 
the validity of the technical material presented. This leads on to checking 
the document for both technical content and linguistic accuracy. Zirinsky 
maintains that a manual will improve by no more than 20% as a result of a 
review, so the better the quality of the product to start with, the better it 
will be after being reviewed and tested. 
As regards actually conducting the test, Zirinsky asserts that users should 
remain objective and should be fully briefed about the product and their 
role in the test. They should only be provided with enough information to 
ensure that they fully understand what is expected of them. Subjects should 
not be told what the researchers are looking for: for example, they might be told that the researchers want to see which of two versions of a user guide is better, not that the aim is to see what effect repetition has on a document’s usability. Furthermore, subjects must be made to feel relaxed and confident 
enough to make constructive criticisms and comments regarding the docu-
ment. 
It is clear from the previous studies that many of the approaches simply 
do not apply completely to this study, even though several of the studies 
provide useful practical pointers. Of the literature reviewed, only two stud-
ies specifically set out to conduct comparative usability tests on print docu-
mentation where the object is to gauge the effect of a single, non-technical 
variable. As such, these studies provide a broad framework or model for 
conducting an empirical study to test the effect of Iconic Linkage. The first 
of these, conducted by Foss et al. in 1981, aimed to improve usability and accelerate learning by examining the use of supplementary information and the effect of restructuring a user guide. The second study was conducted by […] these studies in detail. 
