In this entry, we introduce tenets of usability engineering (UE) and user-centered design (UCD), interrelated approaches to ensuring that a map or visualization works for the target use. After a general introduction to these concepts and processes, we then discuss treatment of UE and UCD in research on cartography and geographic visualization. Finally, we present a classification of UE evaluation methods, including a general overview of each category of method and their application to cartographic user research.
Ooms, K. and Skarlatidou, A. (2018). Usability Engineering and Evaluation. The Geographic Information Science & Technology Body of Knowledge (1st Quarter 2018 Edition), John P. Wilson (ed). DOI: 10.22224/gistbok/2018.1.9
evaluation: the assessment of the extent to which a product, mapping or otherwise, supports user needs
conceptual development: the outline of a product’s functional requirements prior to product development, as identified from the work domain analysis
debugging: the process of fixing errors and optimizing code before final release of a product to the end users
discount usability: a cost-effective approach to usability evaluation that prescribes testing with few participants and early prototypes
expert-based methods: methods involving non-user participants with a high level of expertise
field-based studies: methods conducted in a real-life setting for the product, with real users and use case scenarios
formative studies: exploratory studies conducted early in design aimed to reveal user needs and product requirements
laboratory based studies: methods conducted in a controlled setting to simplify the study protocol and avoid confounding issues with data collection and the testing environment
mixed-methods: the combination of different methods in order to triangulate findings and improve design
prototyping: the generation of partially-functional product designs to gather feedback during the early stages of development
qualitative methods: methods that produce non-numerical, descriptive data
quantitative methods: methods that produce numerical data
summative studies: confirmatory studies conducted at the end of design to evaluate the product’s performance against criteria established earlier in design
target user: the end users who will operate the product
theory-based methods: methods applied by the product designers that draw on the cartography and visualization literature to inform evaluation
usability engineering: a collection of processes and methods to improve the usability of a product
usability: the ease of use of a product, taking into account the target user group and tasks
user-based methods: methods soliciting input and feedback from target users of the product
user-centered design: a multi-step and iterative approach to designing a product that acquires input from users (stakeholders, end users, experts, etc.) throughout design
utility: the usefulness of a product, matching the user goals and tasks to the implemented functionality
work domain analysis: the process of collection, analysis, and synthesis of the target users’ functional requirements
Have you ever worked with an interactive map or visualization where you could not find the function you needed, or where you found the correct function but the results were completely different from what you expected? These frustrations reflect how well a certain interactive map or visualization supports the user in performing specific tasks (e.g., visualizing, analyzing, interpreting, processing data), taking into account the context of use (individual, shared, in an office, in the field, etc.). In this entry, we first introduce the concepts of usability and usability engineering (UE), with corresponding definitions and their application to interactive maps and visualizations. Next, a well-established approach that implements UE is discussed: user-centered design (UCD). We focus on its different stages and its application in research on interactive maps and visualizations. Finally, an overview of the different methods used in UE (and thus also in UCD) is presented, including some structures to categorize them.
2.1 Usability and usability engineering
Usability engineering (UE) originated in the field of software development. UE is a toolbox of principles and methods that can be used throughout the lifecycle of product and system development, including interactive maps and visualizations. The ultimate goal of UE is the creation of more usable, user-friendly products that are tailored to the actual needs of target end users. IBM in 1981 (Whiteside et al., 1988) and later Apple (Nielsen, 1993) were among the first companies to establish usability laboratories to improve their products.
To understand usability engineering, we first need to clarify what the term usability implies. One of the earliest definitions of usability emphasized the importance of effectiveness, learnability, flexibility, and attitude (Shackel, 1986). This definition also acknowledged that usability is contextual, based on variables associated with the users, the use environment, and the user tasks, as well as the significance of affective elements such as satisfaction and likeability, which are now integrated into the wider user experience (UX; see UI/UX Design) of geospatial technologies.
Furthermore, Nielsen (1993) defined usability as a quality attribute assessing how easy interfaces are to use, and outlined five usability attributes: learnability, efficiency, memorability, errors (a low error rate, and easy recovery from the errors that do occur), and satisfaction.
During the same period, Nielsen (1992), then at Bellcore, argued for a shift towards a more engineering-focused approach to usability, urging iterative testing and the use of standardized discount usability approaches in product evaluations. Within this context, usability engineering has been described as a set of methods for investigating and improving system and software usability (Nielsen, 1993). In 1998, the International Organization for Standardization proposed ISO 9241-11, which defines usability as “The effectiveness, efficiency and satisfaction with which specified users achieve specified goals in particular environments” (ISO 1998, p.2). In this definition, effectiveness is the ability of the user to complete a task, efficiency refers to the time and cognitive effort required to complete a task, and satisfaction describes the user's response after task completion.
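To make these three ISO measures concrete, the sketch below computes them from a handful of invented usability-test session records. The data, the binary completed/failed outcome, the 1-5 satisfaction scale, and the choice to average time over completed tasks only are illustrative assumptions for this example, not prescriptions from the standard.

```python
# A minimal sketch of operationalizing the ISO 9241-11 usability measures
# from hypothetical usability-test session records.
sessions = [  # (task completed?, time on task in seconds, satisfaction 1-5)
    (True, 42.0, 4),
    (True, 55.5, 5),
    (False, 90.0, 2),
    (True, 38.2, 4),
    (True, 61.0, 3),
]

# Effectiveness: share of tasks completed successfully.
effectiveness = sum(done for done, _, _ in sessions) / len(sessions)

# Efficiency: mean completion time, here over successful tasks only
# (a common but not universal convention).
efficiency = (sum(t for done, t, _ in sessions if done)
              / sum(done for done, _, _ in sessions))

# Satisfaction: mean post-task rating (mean used for simplicity here).
satisfaction = sum(r for _, _, r in sessions) / len(sessions)

print(f"effectiveness: {effectiveness:.0%}")
print(f"efficiency:    {efficiency:.1f} s mean time per completed task")
print(f"satisfaction:  {satisfaction:.1f} / 5 mean rating")
```

Reporting all three measures together matters: a design can score well on effectiveness yet poorly on efficiency or satisfaction, and each points to a different kind of redesign.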
In line with this latter definition, Roth et al. (2015) synthesized these definitions into three tightly-related components that define an interface's success: usability, utility, and users. Thus, besides usability in Nielsen's (1993) sense, the authors also focused on utility and the user. Utility refers to 'the specified goals' in the ISO definition, taking into account what the user's tasks and goals are with the system. This is inherently linked to the characteristics of the users themselves and, therefore, it is essential to define a target user group: the 'specified users' in the ISO definition.
As an illustration of these concepts, two screenshots of well-known products in the field of cartography and GIS are shown in Figure 1: Esri's ArcMap and Google Maps. These popular interactive maps represent two different use cases in cartography and GIS. The number of features (utility) implemented in ArcMap is much greater than in Google Maps. Google Maps allows users to view the basemap with some basic interaction tools (zooming, panning, switching to a satellite view, showing pictures, enabling Street View, querying information, etc.). In ArcMap, the user can also add multiple (raster and vector) layers atop a basemap, and a large collection of tools is available to process the data: reprojecting data layers, editing vector data, advanced network analysis, spatial analysis, 3D analyses, etc. The usability of these systems depends on the third factor: the characteristics and goals of the target user. A user who just wants to find the shortest route from a holiday park to the nearest shop will not be able to do this efficiently in ArcMap because of the overload of tools from which the appropriate one has to be selected. There is thus a much steeper learning curve associated with ArcMap than with Google Maps (learnability), resulting in a lower usability of ArcMap for this user and task. On the other hand, if a geospatial analyst is asked to calculate the distribution of shop categories and their accessibility within a certain distance around a holiday park, the usability of Google Maps will be much lower because of its lack of utility.
Figure 1: Comparing the utility-usability-user trade-off for two well-known interactive maps in cartography and GIS: Esri’s ArcMap (top) and Google Maps (bottom).
2.2 UE in cartography and geovisualization
Usability engineering in cartography and geovisualization can be traced to the 1970s, with preliminary research on human spatial cognition and interaction with maps. The early 1990s saw a growing interest in the topic (see Haklay, 2010), with usability in this context referring to a number of critical components that influence how people interact with geospatial systems: the users (e.g. Montello, 2009; van Elzakker & Griffin, 2013); the maps (e.g. Haklay, 2010); and the user-map interaction (e.g. Nivala et al., 2008).
3. UE approaches and applications
3.1 User-centered design
The consideration of human capabilities and other human characteristics in the design of computerized systems (Nickerson, 1969) gave birth to a series of approaches and research practices that address these issues. With its roots in ergonomics and human factors, user-centered design (UCD) was first established in the 1980s as a philosophy and design methodology placing users at the center of the product development process (Norman & Draper, 1986). Users are engaged with the design in ways similar to participatory research, providing input into the early conceptualization of the product and feedback on each design iteration. Norman (1988) emphasized the importance of understanding users' needs and of eliciting their requirements for the usability of the design. He therefore suggested that in UCD users are involved from the very early stages of product conceptualization and requirements gathering through to iterative testing and evaluation.
Figure 2. A general overview of the UCD approach (adapted from Ooms, 2016).
Over the years, several UCD approaches or processes have been recommended in cartography and GIScience. These all take the form of a number of iterative stages, going from gathering requirements and designing an initial prototype to analyzing and refining the design into a final product (see Figure 2). Slocum et al. (2003) proposed a process consisting of six steps, which was later revised by Robinson et al. (2005) and Roth et al. (2015) to include: (i) work domain analysis (user requirements, needs assessment), (ii) conceptual development, (iii) prototyping, (iv) interaction and usability studies (iterative evaluation), (v) implementation (product development), and (vi) debugging.
The first two steps focus on defining the target users, what tasks the application should support, and which features should be included for these target users: the utility of the system. The importance of these steps - in which the requirements of the systems are gathered - is stressed by van Elzakker and Wealands (2007). After a number of iterative stages over steps (iii) and (iv) above, a fully operational system is implemented, which is again iteratively evaluated to improve its usability.
Nivala et al. (2007) studied how familiar map makers are with the techniques used in usability engineering and their suitability for evaluating (screen) map designs. They concluded that most map making companies are interested in applying UCD, but lack the knowledge of how to implement the approach, including the different evaluation techniques. Nevertheless, UCD has increasingly been applied to the design and development of interactive maps and visualizations supporting cartographic research (see next section). van Elzakker and Griffin (2013) nonetheless stress the importance of involving users during a product's development, including their requirements and (cognitive) capabilities. In this context, UCD training is essential for cartography and visualization.
3.2 UCD applications in cartographic research
UCD has been applied in a wide variety of cartographic applications: desktop applications, web mapping, virtual environments, collaborative environments, mobile mapping, etc. (see also Haklay, 2010). The focus should not only be on the usability of the cartographic product itself, but on all aspects of the Geographic Information Technology applied (usability of the GI, databases, methods of data collection, hardware, software, interfaces, etc.). The structure of the final system, and thus the type of application that is most appropriate, is derived from the requirements analysis (i.e., steps (i) and (ii) in Roth et al., 2015). Below we present an overview of topics within this Body of Knowledge for which a UCD approach has been used to enhance the related interactive maps and visualizations: Web Mapping, Geovisualization (forthcoming), Geovisual Analytics, Virtual and Immersive Environments (forthcoming), Mobile Mapping and Responsive Design (forthcoming), Geocollaboration (forthcoming), User Interface and User Experience Design (e.g. Roth, 2015; Schobesberger, 2012; Delikostidis, 2011; Skarlatidou & Haklay, 2006).
4. UE evaluation methods

There is a variety of UE methods, drawn from various disciplines, that can be used at different stages of system and product design and development (e.g. within UCD) and that serve different purposes. Several methodological taxonomies have been proposed in the literature; these are briefly reviewed in this section.
Besides selecting appropriate UE methods that serve the purposes and aims of the wider methodological framework, it is equally important that the study design and its results are reported in a consistent manner, which is essential for the transferability, reliability, generalizability, and reproducibility of scientific research (see Cartography & Science). This description should consist of three main elements: who the participants are, the materials used (e.g. prototype, cartographic product), and the procedure followed (e.g. methods applied, user tasks). If user testing is one of the methods used, a critical methodological concern is the number of participants that should be recruited to ensure the reliability and objectivity of the findings. This decision depends on several factors (e.g. whether the data will be statistically validated), but within discount usability engineering, Nielsen and Landauer (1993) demonstrated that testing with five participants can yield enough insight into interaction problems, while after the ninth participant the results become repetitive.
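Nielsen and Landauer's argument rests on a simple probabilistic model: if each participant independently exposes a given usability problem with probability p, then the expected share of problems found after n participants is 1 - (1 - p)^n. The short sketch below illustrates the diminishing returns, using p = 0.31, the average fit Nielsen and Landauer reported across the projects they studied; for a specific product, p would have to be estimated from that product's own test data.

```python
def problems_found(n, p=0.31):
    """Expected proportion of usability problems uncovered by n test
    participants under the Nielsen & Landauer (1993) model, where p is
    the probability that a single participant exposes a given problem."""
    return 1 - (1 - p) ** n

# Diminishing returns: most problems surface within the first few sessions.
for n in (1, 3, 5, 9, 15):
    print(f"{n:2d} participants -> {problems_found(n):.0%} of problems")
```

With p = 0.31, five participants are expected to expose roughly 84% of the problems, and nine participants about 96%, which is why discount usability favors several small iterative tests over one large one.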
A distinction between qualitative and quantitative methods is perhaps the most common way to categorize methods: qualitative methods collect mainly descriptive data in textual or other non-numerical form, while quantitative methods collect mainly numerical data. In UE, qualitative and quantitative methods are equally important, and many studies mix them in order to effectively answer the underlying research questions. Quantitative methods in Human-Computer Interaction (HCI) research have traditionally been used in controlled experiments and hypothesis testing, although they are not limited to these situations. Qualitative methods, on the other hand, have received increasing attention, as they provide in-depth understanding of how users interact with the technology of interest in particular contexts of use. Popular qualitative methods in UE include the so-called 'think aloud' study and participant observation, while methods such as eye tracking are mostly used to collect quantitative data. Other methods, such as interviews and questionnaires, enable the collection of both kinds of data; for example, questionnaires can collect user demographics (e.g., age), rankings of preferences, and Likert-scale responses, but also open-ended answers that describe feelings, opinions, and experiences.
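As a small illustration of the quantitative treatment of questionnaire data, the sketch below summarizes invented responses to a single hypothetical Likert item. Because Likert responses are ordinal rather than interval data, it reports per-point frequencies, the median, and a "top-2-box" agreement share instead of relying on a mean.

```python
from collections import Counter
from statistics import median

# Hypothetical responses to one Likert item ("The map legend was easy to
# understand", 1 = strongly disagree ... 5 = strongly agree).
responses = [4, 5, 3, 4, 2, 5, 4, 4, 3, 5]

counts = Counter(responses)   # frequency per scale point
med = median(responses)       # central tendency for ordinal data
# Top-2-box: share of respondents answering 4 ("agree") or 5.
agree_share = sum(r >= 4 for r in responses) / len(responses)

print(dict(sorted(counts.items())))
print(f"median: {med}, top-2-box agreement: {agree_share:.0%}")
```

The same closed-ended items can be paired with an open "why?" field, so that a single questionnaire yields both the quantitative summary above and qualitative material for thematic analysis.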
Another popular distinction of HCI methods or study designs is between formative and summative studies, which references the stage of the UCD process in which the method takes place. Formative - or exploratory - studies are conducted at the beginning of the UCD process in order to define the target users' profiles and the product's utility (steps (i) and (ii) of the UCD process in Roth et al., 2015). The focus is on understanding the human-computer interactions with the product, which includes discussing and testing prototypes (step (iii) in the UCD process). Summative - or assessment - studies are conducted in later stages of the UCD process (steps (iv) to (vi) in Roth et al., 2015), when the structure of the product has been defined. The focus is on quantitative results: statistically comparing user performance between different configurations (interaction device, interface components, etc.) or against benchmarks as quality assurance.
Another classification in UE focuses not on particular methods, but on the context in which the methods are used (e.g. Carpendale, 2008). Laboratory-based studies are characterized by their controlled nature, ensuring a high level of reliability and repeatability of the results. Nevertheless, they often lack realism: the product is not used in its proper context. When the cartographic product is used in a realistic context - in field-based studies - there are many (unforeseen) elements that might influence the results (different lighting conditions, noise, smells, interaction with other persons, moving objects, etc.). A high ecological validity can thus jeopardize the reliability of the obtained measurements.
The last taxonomy differentiates UE methods between expert-based, user-based, and theory-based methods. Expert-based methods support an evaluation that is carried out by experts and are used to expose interaction problems (i.e., usability problems) associated with the user interface design. This category includes methods such as guideline review, heuristic evaluation, cognitive walkthrough, and consistency inspection. Of these predictive evaluations, heuristic evaluations and cognitive walkthroughs are among the most popular (explained in more detail in Table 2). User-based methods involve the recruitment of real users and facilitate an understanding of their difficulties through observation as they interact with the system and as they think aloud. User-based methods not only help to detect usability deficiencies, but can also support the development of innovative and creative solutions. Several methods support or assume the involvement of real users, of which the most popular is usability user testing. Roth et al. (2015) add a third category, theory-based methods, such as scenario-based design, consultation of secondary sources, and automated evaluations. Table 2 provides an overview of some of the most popular methods used in cartography and visualization. For a preliminary overview and an example of how these methods can be implemented in the cartographic context, refer to Skarlatidou et al. (2010).
Table 2. Popular UE methods and example applications in cartography and geovisualization.

UE Method | Literature: Application in cartography and geovisualization
---|---
Questionnaire & Survey | Haklay & Zafiri, 2008; Allison et al., 2016
Eye Tracking | Fabrikant et al., 2008; Ooms et al., 2012; Popelka & Brychtova, 2013
Thinking Aloud | van Elzakker, 2004; Flink et al., 2011; Ooms et al., 2015
Usability User Testing - Observation | Skarlatidou & Haklay, 2006; Nivala et al., 2008
Cognitive Walkthrough | Skarlatidou et al., 2010; Savage et al., 2012; Brown et al., 2013
Heuristic Evaluation | Skarlatidou et al., 2010; Brown et al., 2013
Focus Group | Monmonier & Gluck, 1994; Harrower et al., 2000
Interview | Slocum et al., 2004
It is good practice to combine multiple methods to optimize evaluation realism, reliability, and validity (e.g. Bleisch, 2011). Realism - or ecological validity - reflects how well real-life situations are represented in the experiment's design. Reliability relates to the consistency of the findings and thus the repeatability of the measurements; obtaining the same results from different methods strengthens the reliability of these findings. Finally, external validity - or generalizability - refers to the applicability of the experiment and its results to other contexts. An overview of how a mixed-methods approach can be applied in the UCD of interactive maps and visualizations is provided by Ooms (2016). A first strategy is to combine methods that complement each other in such a way that their limitations are covered: the limitation or weakness of the first method is targeted by the second, making it possible to measure a broader spectrum of variables and thus derive more solid conclusions. A second strategy is to combine methods that measure exactly the same factor, which serves as a check on the reliability of the experiment and the recorded data. Finally, different factors can be targeted by combining methods that do not necessarily cover each other's limitations but provide data from which, when combined, additional insights can be derived. As such, new findings are triangulated across the different data sources.