Designing for usability is a user-centered approach to design. Jordan lays out the considerations to keep in mind during the design process, proposes a set of methods (some empirical, others non-empirical) that can be used as testing tools, and discusses the advantages and disadvantages of each. His writing helps designers better understand that there are different ways of testing users, and that some methods are more appropriate than others depending on the tasks involved in using a given product or interface and on its demographic or technographic target audience.
“Designing for Usability”
The author emphasizes the need for designers to define their target audience (e.g. the general public, an expert group) in terms of its physical characteristics (height, reach, strength) and its cognitive characteristics (specialist knowledge, attitudes, expectations). Understanding an audience’s characteristics is the first step toward a better sense of the usability requirements involved in designing user-centered products.
Empirical methods, such as focus groups, interviews, and questionnaires, provide designers with the information needed to understand their audience’s attitudes and lifestyles: the contexts in which the products will be used, for what purpose or specific task, and what other activities a given audience is likely to be engaged in at the same time. These processes also help designers calibrate the relevance of a product’s features in accordance with audience needs, beliefs, and expectations. Some products may also need to meet certain “legislative measures”; for instance, the author gives the example of a car stereo interface and how the positioning of the volume controls may affect safety requirements.
Moreover, Jordan writes of “Iterative Design”. Iterative design entails a series of evaluations of a sequence of prototypes: creating a concept for an initial evaluation, testing the given prototype through empirical methods, defining specific problems, and returning for a second iteration, then a third, and a fourth, until the product is deemed usable and appropriate for its audience. He lists different types of prototypes that can be employed for presentation and/or usability testing: written or oral descriptions (specs), visual prototypes, models (physical representations), screen-based interactive prototypes (simulated interactions), and fully working prototypes (pseudo-products).
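To make the evaluate-and-redesign cycle concrete, here is a minimal, runnable sketch of the loop. The Python structure, the faked findings, and the stopping rule are my own illustration of the process, not Jordan’s specification; in practice the evaluation step would be one of the empirical methods described below.

```python
def evaluate(prototype: str, round_: int) -> list[str]:
    """Stand-in for an empirical test (focus group, think-aloud, etc.).

    Returns the usability problems found; an empty list means the
    prototype is deemed usable. Findings here are faked so the
    example converges, purely for demonstration.
    """
    fake_findings = {
        1: ["volume control hard to reach", "labels unclear"],
        2: ["labels unclear"],
        3: [],
    }
    return fake_findings.get(round_, [])

def iterative_design(concept: str) -> str:
    prototype, round_ = concept, 1
    while True:
        problems = evaluate(prototype, round_)
        if not problems:                     # deemed usable: stop iterating
            return prototype
        print(f"iteration {round_}: fixing {problems}")
        prototype += f" (rev {round_})"      # redesign to address problems
        round_ += 1

print(iterative_design("car stereo interface"))
```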
“Methods for Usability Evaluation”
Furthermore, Jordan offers a detailed description of techniques for observing users and evaluating the ‘ease of use’ of user-centered products. He proposes practical methods for “uncovering unexpected usability problems” (p.51), which he divides into empirical and non-empirical.
Empirical Methods:
“Private Camera Conversations” give access to information about users’ lifestyles and attitudes. Participants are recorded on tape as they reveal the positive and negative aspects of their experience: ease of use, and the product’s purpose in their everyday lives. Participants are usually questioned privately, with no face-to-face interaction with the moderator. This setting may help them feel they are in a space where they can speak freely, and hence reveal information they would otherwise omit. The videos also give designers raw evidence to work with. The disadvantage appears when participants deviate from the purpose of the conversation and deliver an irrelevant monologue that designers cannot analyze. Such loose responses are also very difficult to compare with one another because of the inconsistency of their content.
“Co-discovery” involves recruiting two participants (friends or acquaintances) and observing their reactions as they discover the controls of a new product together and verbalize their thoughts naturally in the process. This method often produces more informal and honest ‘verbalisations’ between the two users, which may help explain why they found problems with a product. As with the previous method, the moderator has no control over the issues raised during the conversation, and a moderator’s interference might disrupt the naturalness and spontaneity of the participants’ reactions.
“Focus Groups” are composed of several participants and a leader who directs and prompts topical discussions. There is potential here for gathering many perspectives on the aesthetic, functional, and other components of a product, as well as for finding solutions for new iterations. However, the leader’s role is crucial, since in a large group some voices may dominate the discussion and prevent others from expressing their opinions and thoughts. The author suggests five or six participants to ensure equal opportunity.
In “User Workshops”, a group of users engages ‘hands-on’ in the creation of a new product by contributing usability requirements and producing sketches and design solutions. This includes users in the design process and makes for more meaningful products that respond to user needs, wants, and attitudes, though it remains a time-consuming and challenging exercise.
“Thinking Aloud Protocols” serve as a way to enter a user’s mind as she uses an interface and articulates her thought process. Tasks can be specified or left open to ‘free exploration’. This method may reveal ‘objective performance data’. Here the moderator’s role is to minimize potential distractions when prompting a task or questioning specific user behaviour (e.g. “Why did you click here and not there?”), since such questions could tempt users to change their attitudes and responses.
“Incident Diaries” are a form of probe in which users note the difficulties they experience with a product over time. This approach can be both effective and incomplete. It is effective in that it helps designers rate the different components of a product on a scale from ‘very easy’ to ‘very difficult’ (a Likert scale) and documents ‘long-term usage’. However, the data may be incomplete, as users might not be faithful to the diary format and may not keep up with the schedule.
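To illustrate how such diary entries might be aggregated, here is a minimal sketch; the entries, field names, and five-point coding are invented for the example and are not from Jordan’s text.

```python
# Illustrative aggregation of incident-diary ratings on a five-point
# Likert scale (1 = "very easy" ... 5 = "very difficult").
from statistics import mean

diary = [
    {"day": 1, "feature": "tuner",  "difficulty": 2, "note": "found station quickly"},
    {"day": 3, "feature": "volume", "difficulty": 5, "note": "reached while driving"},
    {"day": 7, "feature": "volume", "difficulty": 4, "note": "still awkward"},
]

# Average difficulty per feature reveals long-term trouble spots.
for feature in sorted({entry["feature"] for entry in diary}):
    scores = [e["difficulty"] for e in diary if e["feature"] == feature]
    print(f"{feature}: mean difficulty {mean(scores):.1f} over {len(scores)} entries")
```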
“Feature Checklists” consist of a list of the tools, links, and functions comprised in a product, which users check off as they successfully use them. Here the designer receives factual usage data rather than user-experience data.
“Logging Use” employs software to track users’ screen and mouse activity and determine the usefulness of a product’s features. The disadvantage here lies in interpreting the collected data. The author writes: “If parts of a product’s functionality have not been used, or have been used very little […], it could be that this aspect of functionality is not useful and so users do not bother with it, […] it is avoided as it is difficult to use, [or] users did not know it existed.” (p.62) He proposes adding an interview at the end of the test in order to make sense of data that might otherwise mislead design revisions.
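A use log of this kind can be quite simple. The sketch below, with invented event names, records timestamped feature events and tallies how often each feature is used, which is precisely the kind of count whose interpretation Jordan warns about.

```python
# A minimal sketch of use-logging: timestamped feature events are
# recorded and tallied. The event names are invented examples.
import time
from collections import Counter

class UseLog:
    def __init__(self):
        self.events = []                       # (timestamp, feature) pairs

    def record(self, feature: str) -> None:
        self.events.append((time.time(), feature))

    def tally(self) -> Counter:
        return Counter(feature for _, feature in self.events)

log = UseLog()
for feature in ["volume", "tuner", "volume", "preset_1"]:
    log.record(feature)

# Low counts alone are ambiguous (useless? hard to use? unknown to users?),
# which is why Jordan pairs the log with a follow-up interview.
print(log.tally())   # Counter({'volume': 2, 'tuner': 1, 'preset_1': 1})
```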
The “Field Observation” approach helps designers understand the place of a product in a user’s life and the potential interferences present in the user’s habitual environment at the time of use. For best results, it is advised that the moderator be as invisible as possible. Jordan also raises an ethical tension in this approach: taping users without their knowledge is ethically problematic, while observing them openly could “compromise the level of ecological validity” of the data (p.63). Testing a product near the end of its design can also be less useful for designers, as changes will then be difficult to make.
“Questionnaires”, in brief, demand either ‘fixed’ or ‘open-ended’ responses from users. Fixed responses (quantitative) can be inaccurate at times, since users feel obliged to tick one answer among prefixed terms, which may mislead the analyst; on the other hand, they provide findings that are comparable across participants. Open-ended responses (qualitative) allow users to raise issues freely in their own terms, making the findings more valid, but they are more costly for users to complete and may be left unanswered.
“Interviews”, as Jordan proposes, can be “unstructured, semi-structured and structured” (p.68). Similar to questionnaires, this approach consists of a list of specific concerns the moderator wants answered, or of broader investigations that help define a user’s attitude towards a product. Unstructured interviews pose open-ended questions with open-ended answers, providing insight into the sorts of problems encountered in the user experience. When moderators have somewhat specific knowledge of potential errors or difficulties in their product, a “semi-structured” interview is advisable, as it involves both specific questions and open-ended investigations. “Structured” interviews use quantitative questions suited to ‘requirements capture’. Interviews allow more valid data collection and minimize misunderstandings.
The “Valuation Method” approximates the value (or cost) users attach to a product by collecting quantitative data.
“Controlled Experiments” refer to testing users in a laboratory, devoid of excess noise and disruptions. The advantage is that users can focus on the given task; the disadvantage concerns the lack of environmental familiarity. The ‘experimental conditions’ could induce unrepresentative user behaviour and hence produce misleading data.
Non-Empirical Methods:
“Task Analyses” focus on breaking task performance down into its cognitive components. The analysis enumerates the steps required of users to complete a specific task, with the aim of redesigning and minimizing those steps for easier, simpler usage. Jordan mentions GOMS and the Keystroke-Level Model as tactics for analyzing cognition. This method helps collect objective data on one hand, but on the other risks missing qualitative usability issues, and its predictions may vary between expert and beginner users.
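As an illustration of the Keystroke-Level Model’s arithmetic, the sketch below predicts expert execution time by summing the commonly cited Card, Moran, and Newell operator times; the example operator sequence is an invented one, not from Jordan’s text.

```python
# A minimal Keystroke-Level Model (KLM) estimate. Operator times are
# the commonly cited Card, Moran & Newell values, in seconds;
# the example sequence below is an invented illustration.
KLM_TIMES = {
    "K": 0.2,    # keystroke or button press (skilled typist)
    "P": 1.1,    # point with mouse to a target on screen
    "B": 0.1,    # press or release a mouse button
    "H": 0.4,    # home hands between keyboard and mouse
    "M": 1.35,   # mental preparation for an action
}

def klm_estimate(sequence: str) -> float:
    """Sum operator times for a sequence such as 'MPBPB'."""
    return sum(KLM_TIMES[op] for op in sequence)

# e.g. deciding, pointing at a menu, clicking, pointing at an item, clicking:
print(f"{klm_estimate('MPBPB'):.2f} s")  # 1.35 + 1.1 + 0.1 + 1.1 + 0.1 = 3.75 s
```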
“Property Checklists” consist of analyzing how well a product responds to the human factor. Is this product humane? Does it respond to physical and psychological needs? Does it employ coherent language? Are the controls placed at an appropriate and perceptible height and reach? This method need not involve participants; rather, it follows a set of required specifications and checks whether they have been applied in the design process in terms of “consistency, compatibility, and good feedback” (p.75).
“Expert Appraisals” seek appraisals from experts in the field of product usability, who diagnose the potential obstacles users may confront and provide solutions for fixing those problems. The disadvantage is that participants need not be present, so the diagnosis is somewhat less accurate than “task performance data” (p.78).
Finally, “Cognitive Walkthroughs” are breakdowns of the steps involved in completing a specific task. Here, the expert impersonates a typical user and experiences the product. Since every user has idiosyncratic behaviour, this method is valid for anticipating a wide range of problems but invalid for specifying empirical problems, as it relies on expert rather than actual user performance.