Technical Translation: Usability Strategies for Translating Technical Documentation


Empirical Usability Evaluation 
Regardless of the time and effort spent on engineering usability into an interface using the variety of methods outlined in previous chapters, the only true way of establishing whether the interface is indeed usable is to conduct some form of usability evaluation. There are essentially two types of evaluation – formative evaluation and summative evaluation. The type of evaluation used depends on when the evaluation is to be carried out as well as what it is hoped the evaluation will achieve.

Formative evaluation takes place during the development process in order to detect potential problems before the design is actually implemented. The aim here is to improve the usability of the final interface (Preece 1993:108; Preece 1994:603).
Summative evaluation, in contrast, is carried out on the finished interface. The aim here is to determine the level of usability of the interface so that judgements can be made as to the overall usability and quality of the interface (Preece 1994:103). Summative evaluation is used to ensure that "the final product is up to standard, not to shape the design and development processes" (Landauer 1997:204).
For the purposes of assessing a finished translation, we are interested in the overall level of usability of the final product, i.e. the user guide. For this reason, we will restrict ourselves to an examination of usability evaluation from a summative point of view.
Historical Problems in Document Usability Research 
A review of the literature on usability evaluation reveals a range of publications concerned with improving the usability of software documentation. However, much of the research undertaken in recent years is less than satisfactory for our purposes here for a number of reasons.

First of all, the term documentation as used by several authors proves problematic, not only in terms of the aims of this book, but also in terms of what happens in an industrial context. Documentation is frequently defined in an extremely broad sense as including practically anything that involves some form of text. Indeed, Mamone (2000:26) defines documentation as

…user manuals, online help, design features and specifications, source code comments, test plans and test reports, and anything else written that explains how something should work or be used.
While this is perfectly acceptable and indeed suitably comprehensive in many ways, it is problematic from the point of view of usability evaluation. Mamone's assertion (ibid.) that documentation can come in "hard or soft form" fails to take into account the fact that while technology and economics have pushed the online delivery of information, online and hardcopy documentation have specific strengths and weaknesses (Smart & Whiting 1994:7). As such, they cannot be assessed using identical sets of criteria.
Indeed, the all-inclusive definition of documentation includes online help. In addition to the obvious textual issues involved in using help and the fact that information is presented as short, independent units, the diversity and sophistication of help systems mean that there is quite a different set of considerations (such as navigation models, resolution, display speeds, download times, delivery options etc.) to be borne in mind when evaluating their usability in comparison with that of hardcopy documentation. So, while Mamone provides a useful overview of usability test procedures, the failure to identify the differences and the similarities between hardcopy documentation and online help compromises his approach. But it is not just Mamone who fails to make this distinction; Prescott & Crichton (1999), Harrison & Mancey (1998), Simpson (1990) and Mehlenbacher (1993), to name a few, either concentrate on online texts or group online texts together with print documentation.
Although the practical issues relating to the evaluation of online documentation mean that online and print documentation cannot reasonably be assessed using a single theoretical framework, other factors justify the separate treatment of the two types of text. By grouping print and online texts together, print documentation risks being regarded as a "low-tech" form of documentation which can be assessed with just a subset of the online evaluation paradigm. Admittedly, online documentation is increasingly regarded as an integral part of products which speeds up the dissemination of information and which – if designed well – can allow users to access information more quickly. Nevertheless, a large proportion of users prefer the "book" format. According to Smart & Whiting (1994:7), "some information is best accessed from a printed form." Such information includes troubleshooting, lengthy conceptual descriptions or material where annotation is essential. With this in mind, we can see the need to examine documentation purely from a print point of view.
