Interaction scenarios describe situations where one or more users interact with one or more interfaces of digital computing systems or devices. When interested in one of these scenarios, the ideal way to understand the interactivity is to experience it, or at least to see it in action. Unfortunately, interacting with the system is not always possible. Indeed, the main medium for communicating about research in Human-Computer Interaction remains research papers, which use non-dynamic formats (in particular PDF or static HTML web pages). As “a picture is worth a thousand words”, authors often design static graphical illustrations to explain these interactive scenarios, most of the time associated with captions or legends to form a complete figure [27]. Furthermore, an increasing number of journals recommend adding graphical abstracts [38] summarising the submissions (for example, Elsevier journals suggest including visual abstracts [25]). Such a figure should give a clear representation of the work described in the paper, summarise the content so readers can quickly grasp the main take-home message, encourage browsing, and promote and identify research papers.

Illustrations are not only useful for describing interactive scenarios in research papers; they are also widely used as effective visual means in presentations or for communicating ideas during meetings. They can take many forms and represent various kinds of information: photographs, drawings, diagrams depicting concepts and ideas, or charts helping to visualise data. The structure of illustrations may also vary, using one or multiple frames or being augmented with titles and labels, and even the nature of the captions can be considered part of the illustration [27].
While such figures are ubiquitous and widely used, little is known about the different approaches to creating such static illustrations, and existing works on this topic are limited in terms of domain focus (e.g. design, education) or interaction context (e.g. gestures). In this paper, we unpack a rich taxonomy of styles and techniques that unifies works from the literature, and we investigate how the HCI community uses static figures to depict interactive scenarios. We call static figures any visual representations used in static forms of media (such as PDF papers) containing an illustration (e.g. drawing, photograph) with additional information (e.g. caption, legend, title). In this work we consider only figures representing interactive contexts, where the interaction can be seen as Tool Use [36], i.e. figures “defining how the user acts with a system and how the system acts with a user”, as these can be particularly complex to design and make use of elaborate illustration techniques to represent specific aspects of the interaction (3D, perspective, dynamic gestures, timing, the user's body, etc.). First, we propose a taxonomy of static figures that illustrate tool use, in which we categorise figures according to the different design elements (conceptual or visual attributes of an illustration used to encode the interaction [64]) they are composed of. We classify the design elements under two main categories: the what design elements (which characterise the concepts represented in the figure) and the how design elements (which characterise the visual properties). Then, we coded the design elements for each image of a dataset extracted from the proceedings of four major ACM 2018 conferences in HCI (CHI, UIST, CSCW and Ubicomp) and from a set of patents about interaction designs. Finally, we analyse the coded design elements and corresponding figures to reveal the different strategies used to represent interaction.
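To make the coding-and-analysis step concrete, a minimal sketch of the idea follows: each coded figure can be treated as a set of design-element labels, and recurring combinations of labels hint at representation strategies. The element names below are hypothetical placeholders for illustration only, not the actual codes of our taxonomy, and this is not the tool used in the paper.

```python
from collections import Counter
from itertools import combinations

# Hypothetical coded dataset: each figure mapped to the set of design-element
# labels assigned by a coder. Label names are illustrative placeholders.
coded_figures = {
    "fig_001": {"user_body", "device", "arrow_motion", "photograph"},
    "fig_002": {"user_body", "device", "frame_sequence", "drawing"},
    "fig_003": {"device", "arrow_motion", "drawing"},
}

# Frequency of each design element across the dataset.
element_counts = Counter(e for codes in coded_figures.values() for e in codes)

# Co-occurrence counts: pairs of elements appearing in the same figure.
# Frequent pairs are candidate "strategies" worth closer inspection.
pair_counts = Counter(
    pair
    for codes in coded_figures.values()
    for pair in combinations(sorted(codes), 2)
)

print(element_counts.most_common(3))
print(pair_counts.most_common(3))
```

In practice the qualitative coding itself is done by human coders (as in our methodology); a tally like this only helps surface which coded element combinations recur often enough to examine as strategies.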
An increasing number of papers in HCI are published every year, resulting in an increasing number of graphical illustrations being produced. Yet, no guidelines or recommendations exist to help researchers create these kinds of illustrations. The strategies identified in this paper provide an overview of the figures produced by HCI researchers. They can help early-career researchers working on visual representations of their work, and foster reflection among senior researchers on their own previously created figures. This paper makes the following main contributions:
- A unified taxonomy of design elements that compose figures representing interactive scenarios. Our taxonomy integrates existing taxonomies and synthesises a broad spectrum of techniques and approaches across the many different interactive scenarios illustrated in the HCI community.
- A set of Structural and Interaction strategies used in existing figures to represent specific aspects of the interaction.
- Three open-source software tools: one application to facilitate the time-consuming and tedious coding process, and two online tools to explore the taxonomy we created and identify strategies. These tools can be used to repeat our methodology, either with a different dataset or driven by different goals, in order to extract novel strategies.

2 RELATED WORK

2.1 Existing taxonomies

Our work builds upon and extends related taxonomies that classify visual design elements, all with different and specific motivations, exploring either the impact of illustrations on learners’ behaviour [27], new product development [68], gesture representations [64] or trace figures [4]. To the best of our knowledge, our work is the first taxonomy focused on the classification and characterisation of illustrations that depict interactive scenarios. In this section we summarise the most relevant visual taxonomies and their relation to the work presented in this paper. Pei et al.
proposed a taxonomy of visual design representations in the context of product development [68]. In particular, they discuss that, in the field of product design, the drawing style might differ depending on the life cycle of the illustrated product or the purpose conveyed through the illustration. Using a dataset built by designers and engineers, their taxonomy divides visual design representations into four main categories: sketches, drawings, models and prototypes. While this taxonomy is focused on specific design representations, we believe their work is of interest to our larger exploration of interactive scenario illustrations, as interactive systems might require different types of representation as well. More specifically, Pei et al. propose a specific sub-category, 2D Visual Design Representations > Drawings > Industrial Design Drawings > Scenario and Storyboard, where the purpose of such illustrations is to suggest user and product interaction and portray its use in the context of artefacts, people and relationships. Another taxonomy can be found in McAweeney et al.’s work [64]. They first conducted an elicitation study with designers and researchers to understand the processes and tools used to create gesture representations. The elicitation study pointed out that no guidelines existed to assist researchers in designing gesture representations. The authors then constituted a dataset from 30 papers published in ACM conferences (CHI, ISS and Ubicomp), including trace figures, photographs, computer graphics, abstract lines, dots and texts. Using information from the elicitation study, they coded the dataset and classified the identified design elements into six dimensions grouped under two main categories: structural and details.
Structural dimensions (Perspective, Frame and Colour) are described as necessary dimensions for designing any representation, while Details dimensions (Body Context, Environmental Context and Gesture Elements) are described as optional dimensions commonly used to extend or enrich the structural representation of the gesture. More recently, Antoine et al. proposed a taxonomy of trace figures, a specific type of illustration they define as “graphical representations of the most essential features of a scene by using contours/outlines of objects, people and the environment” [4]. They extracted 124 trace figures from the 222 papers published in the 2015-2017 ACM UIST conference proceedings and used two coders to code the dataset. They identified five categories of trace figures (demonstration of gestures, overview of system setup or assembly, interaction sequences, design space illustrations and others) and extracted eight depicted characteristics (person's body, hands or fingers, devices and objects, screen user interfaces, environment, use of colour, annotations, static vs dynamic, and use of perspective). While this taxonomy covers a number of different interactive scenarios used in the HCI community, it contains only a brief description of the identified categories and characteristics. Moreover, the work is focused on trace figures only and, as such, excludes figures based on photographs, which are likely to be used to illustrate other scenarios (typically too complicated to reproduce as trace figures) and to use different strategies (that would not be adapted to trace-based drawings). Finally, while different from figures illustrating interactive scenarios, Fleming's taxonomy [27] relates to our visual taxonomy. He analysed 787 illustrations extracted from 40 textbooks from four subject areas: English, History, Mathematics and Science [27]. The purpose of the study was to observe the impact of illustrations on learners’ behaviour.
To do so, he established a taxonomy of instructional illustrations by tagging each illustration with attributes grouped into 11 scales for a total of 107 categories. The scales were as follows: Area, Framing, Shape, Position, Elements, Chroma, Achroma, Encoding Style, Encoding Medium, Information Level and Unification. The scales of this work, and the general discussion of visual representations by Fleming, informed our classification scheme discussed shortly.