Technical Translation: Usability Strategies for Translating Technical Documentation
Jody Byrne
Criteria
From the relatively long list of criteria used to assess usability, it emerged that many were unsuitable for use in this study, either because relevant events did not occur with enough frequency to justify their inclusion or because they were impossible to quantify given the procedures and equipment in use. As a result, some criteria were deleted while others were rephrased slightly to make it easier to quantify them. An example of a rephrased criterion is "Errors at point where subject thinks task completed", which was rephrased as "Number of times subject stops work without completing a task". It was also necessary to add additional criteria as a result of other phenomena which were observed during the test: some participants, for example, … the mouse while others immediately began re-reading the user guide.

It was also necessary to reduce the amount of variability which may inadvertently come as a result of unscripted instructions from the test administrator. For example, on any particular day, the administrator may give more or less information to the participants. Scripting the instructions given by the administrator, coupled with a strictly enforced procedure for dealing with questions (see page 226), would rule out the possibility of the administrator making ad hoc comments which might vary from session to session and consequently biasing the results.

The pilot study also made it clear that additional work was needed to establish exactly how the criteria should be applied in the main study. When analysing the error criteria data, it was apparent that the method for handling data from the Post-Test Survey was not appropriate (see page 231). Instead, the number of incorrect answers was added to the totals for the other error criteria. Apart from faulty logic, subtracting the number of correct answers from the total errors meant that any attempts to represent the data in the form of a graph produced seriously skewed results.
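To make the difference between the two scoring approaches concrete, the following is a minimal sketch in Python using entirely invented figures (the study reports no such worked example); it contrasts subtracting a participant's correct Post-Test Survey (PTS) answers from their error total with adding the number of incorrect answers to it.

def score_subtract_correct(other_errors: int, pts_correct: int) -> int:
    # Original (rejected) method: subtract correct PTS answers from the error total.
    return other_errors - pts_correct

def score_add_incorrect(other_errors: int, pts_total: int, pts_correct: int) -> int:
    # Revised method: add the number of incorrect PTS answers to the error total.
    return other_errors + (pts_total - pts_correct)

# A hypothetical participant with 5 errors on the other criteria who answers
# 8 of 10 PTS questions correctly:
print(score_subtract_correct(5, 8))    # -3: a negative "error" count that skews any graph
print(score_add_incorrect(5, 10, 8))   # 7: the 5 errors plus the 2 incorrect answers

The negative totals produced by the first method illustrate why graphs based on it were so badly skewed.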
In addition, it was felt that the application of the error criteria was subjective and, at times, inconsistent. This can be attributed to the fact that a single person was responsible for determining whether an incident actually corresponded to one of the criteria. It was decided, therefore, to clearly define each criterion and what each one involved. The following list presents the modified error criteria and, for each criterion, the definition of what must be observed in order for it to be recorded.

Criterion 1 Tasks Not Completed: The failure of a user to complete a task despite the administrator identifying the precise page in the user guide, ultimately resulting in the administrator giving explicit verbal instructions on how to complete the task.
Criterion 2 Times User Guide Used: Each occasion where the participant stops working to read the user guide.
Criterion 3 PTS Score: The number of questions answered incorrectly by each participant in the post-task survey.
Criterion 4 Incorrect Icon Choices: Where a user clicks an icon which is not associated with the task currently being performed.
Criterion 5 Incorrect Menu Choices: Where a user chooses a menu option not associated with the task currently being performed, or where a user scrolls through several menus without choosing an option.
Criterion 6 Verbal Interactions/Questions: Each occasion where a participant asks a question relating to the way a task should be performed or whether a task has been completed.
Criterion 7 Observations of Frustration: Incidents where a participant expresses frustration verbally or where a participant's body language (e.g. sighing) or facial expressions (e.g. frowning) indicate frustration.
Criterion 8 Observations of Confusion: Incidents where a participant expresses confusion verbally or where a participant's body language (e.g. head-scratching) or facial expressions indicate confusion.
Criterion 9 Observations of Satisfaction: Incidents where a participant expresses satisfaction verbally or where a participant's body language or facial expressions (e.g. smiling) indicate satisfaction.
Criterion 10 …: Where a user uses an incorrect shortcut key or types incorrect commands into a field.
Criterion 11 Stopped Work without Completing Task: Each instance where a participant mistakenly believes the task to be complete or where a …
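As a purely illustrative sketch of how incidents might be recorded against these criteria (the book prescribes no data structure or software; the participant ID, labels and counts below are invented), a simple per-participant tally in Python could look as follows.

from collections import Counter, defaultdict

# Hypothetical coding-scheme sketch: one counter of incidents per participant.
observations: dict[str, Counter] = defaultdict(Counter)

def record(participant: str, criterion: str) -> None:
    # Log one observed incident against a named criterion for a participant.
    observations[participant][criterion] += 1

# An invented observation session:
record("P01", "Times User Guide Used")       # Criterion 2
record("P01", "Incorrect Menu Choices")      # Criterion 5
record("P01", "Times User Guide Used")       # Criterion 2 again
record("P01", "Observations of Confusion")   # Criterion 8

# Per-criterion tallies for the participant, ready for graphing:
print(dict(observations["P01"]))
# {'Times User Guide Used': 2, 'Incorrect Menu Choices': 1, 'Observations of Confusion': 1}

Keeping one tally per criterion per participant makes it straightforward to produce the per-criterion totals and graphs discussed above.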