Cross-Cutting Challenges Program


Time
(EDT)
Tuesday
July 6
8:30-10:45 Touchless AI Interfaces for Functional and Non-Functional Haptics
10:45-11:15 Break
11:15-13:30 Using the Skin as a Medium of Communication

You can also find the detailed program on X-CD, our virtual conference platform.


Touchless AI Interfaces for Functional and Non-Functional Haptics

Time: July 6, 8:30-10:45 EDT

Organizers: Orestis Georgiou (Ultraleap Ltd.), William Frier (Ultraleap Ltd.), Sriram Subramanian (University College London), Marianna Obrist (University College London), Diego M. Plasencia (University College London), Patrick Haggard (University College London), Kasper Hornbaek (University of Copenhagen), Asier Marzo (Universidad Pública de Navarra), and Mykola Maksymenko (Softserve Inc.)

Description: The Covid-19 pandemic has suppressed our sense of touch more than any other sense. On the one hand, social touch such as holding hands and hugging has been forbidden, causing our society to resemble the one depicted in the 1990s science fiction movie “Demolition Man”, where physical contact is heavily sanctioned. On the other hand, traditional user interfaces in public spaces, like elevator buttons and touchscreen kiosks, have been denounced as vectors of pathogen transmission, causing a surge in antiseptic use and the rapid adoption of alternative touchless interfaces such as voice and gesture input.

Meanwhile, a plethora of touchless tactile interfaces, such as focused ultrasonic and infrared mid-air haptic displays, have been developed and explored over the past decade in a variety of settings, including automotive, digital signage, and virtual reality. Underpinning the research and development of touchless interfaces is a truly multidisciplinary effort spanning highly technical aspects of ultrasonics, optics, physical hardware and electronic prototyping, artificial intelligence, software development, immersive technologies (e.g., AR, VR, haptics), human-computer interaction, social experience design, applied psychophysics, cognitive neuroscience, and ethics.

The Covid-19 pandemic calls for an overhaul of ultrasound mid-air haptics towards touchless tactile interfaces in public spaces as well as the exploration of the technology’s newly discovered ability to deliver not just functional haptics (e.g., simple haptic notifications and feedback for discriminative touch), but also non-functional haptic information relating to subjective experiences and human cognition (e.g., social and affective touch). Practical and fundamental questions therefore need to be formulated and addressed if we are to develop practical touchless tactile interfaces for use by the general public, while also being able to elicit, enhance, and influence the emotional state of people during interaction. This will be the focus of the proposed Cross-Cutting Challenge (CCC) session.

Topics addressed by the CCC speakers will include: novel touch-mimicking touchless haptic systems for spatial interaction that stimulate multiple (glabrous and non-glabrous) skin types through different receptors; neurocognitive models of haptic-mediated experiences (i.e., agency, bonding, attachment); artificial intelligence (AI) computer systems that can dynamically modulate our virtual social interactions through affective touchless interfaces; ethical considerations around remote, contactless interactions with the human body and the self.

Schedule

▪ Three keynote speakers: 30-minute presentations
▪ Four lightning speakers: 5-minute presentations
▪ Panel discussion: 15 minutes


Keynote Speakers

Carey Jewitt

Carey Jewitt
University College London (UCL)

Title: The sociality of touch

Abstract: Touchless touch technologies re-mediate touch in ways that simulate, stretch, and/or reconfigure touch sensations, resources, and practices. As these and other touch technologies enter the everyday, their take-up will be part of shaping what ‘counts’ as touch, who and what we touch, and how we touch. Using a multimodal and multisensory framework and the notion of extended touch, which expands what counts as touch, I will discuss the social implications and significance of this shifting tactile technoscape. I will draw on illustrative research examples from the InTouch project to engage with the significance of social and personal histories of touch, materiality and the body, a sense of presence and connection, formulating the ‘right kind’ of touch and reconfiguring ‘knowing touch’, sociotechnical imaginaries of touch, and ethics. I hope to demonstrate the value of amplifying interdisciplinary dialogues on haptics between social science and computer science, engineering, and HCI, and to provoke useful questions for touchless touch.

Biosketch: Carey Jewitt is Professor of Learning and Technology, Director of the UCL Knowledge Lab, and Chair of UCL’s Collaborative Social Science Domain. Her research interests include technology-mediated interaction, the development of multimodal research theory and methods, and innovating research methods across the social sciences and arts. She is PI of the InTouch project (an ERC Consolidator Award), which investigates the social implications of developments in digitally mediated touch, including ‘touch’ in VR, bio-sensing, robotics, iskin, and creative technologies, and digital touch futures. Carey has directed a number of large research projects on methodological innovation, most recently MODE ‘Multimodal Methods for Researching Digital Data and Environments’ (ESRC, MODE.ioe.ac.uk) and MIDAS ‘Methodological Innovation in Digital Arts and Social Sciences’ (ESRC, MIDAS.ioe.ac.uk). She is a founding Editor of the new journal Multimodality & Society (Sage) and of the journal Visual Communication (Sage). Alongside papers on the sociality of touch (e.g. in Information, Communication & Society, New Media & Society, VIRE), her recent books include Interdisciplinary Insights for Digital Touch Communication (2020) with the InTouch team, Introducing Multimodality (2016) with Bezemer and O’Halloran, and The Sage Handbook of Researching Digital Technologies (2014) with Price and Brown.


Alberto Gallace

Alberto Gallace
University of Milano-Bicocca

Title: The future of touch: understanding the neurocognitive complexity of tactile interactions for promoting technological development

Abstract: Although we are not always aware of it, our sense of touch is extremely important for a large number of functions, from walking to eating, from establishing and maintaining social relationships to defining the boundaries of our body. Touch is one of the first senses to develop, and its presence throughout our life enriches the fullness of our experiences. In a nutshell, touch is what makes us human. Not surprisingly, then, we suffer profound negative emotional and physical consequences when it is lacking. This sensory modality is supported by a complex neural system based on the interactions between several sensory receptors, neural conducting fibers, and brain areas. If our aim is to reproduce credible and fulfilling tactile interactions in computer-mediated environments, we need to master this complexity within the boundaries set by current technological limitations. Hacking the mechanisms our brain uses to experience touch is the only pathway to gaining complete access to the frontier of virtual and augmented interactions (and even to exploring new human possibilities).

Biosketch: Dr. Alberto Gallace is a cognitive neuroscientist with a special interest in the cognitive and neural aspects of tactile and bodily information processing. He teaches psychobiology of human behavior and applied neuroscience at the University of Milano-Bicocca. He is the scientific director of the Mind and Behavior Technological Center (MibTec), a research center dedicated to the study of the relationship between VR/AR technologies and human factors. His work investigates multisensory integration, synaesthesia, body representation, and pain/pleasure perception in real and virtual/augmented scenarios. His work underpins the very first neurocognitive model of tactile awareness. He is also interested in the application of cognitive neuroscience research to the fields of engineering, technology development, product design, economics, and marketing. He is the author of two books (one entirely dedicated to the sense of touch: “In Touch with the Future: The Sense of Touch from Cognitive Neuroscience to Virtual Reality”), more than 20 book chapters, and over 80 scientific articles. He has been an invited speaker at a number of international conferences, and his work has been the focus of popular media articles in different countries.


Diego Martinez Plasencia

Diego Martinez Plasencia
University College London (UCL)

Title: Airborne ultrasound: from haptics, to multimodal and towards emotions

Abstract: Although typically more familiar to us for its applications in medical imaging, ultrasound is slowly finding its way as a powerful tool to mediate our interactions with technology and to construct expressive mid-air user interfaces. During this talk, I will provide an overview of the current potential of ultrasound approaches, namely for sensing, parametric audio, and mid-air tactile feedback. I will then discuss how recent advances allow us to harness all these approaches simultaneously, creating expressive multimodal interfaces and even volumetric contents in mid-air. This joint methodological framework allows us to think of ultrasound as a means to control kinetic energy in space, with wide potential in contactless manufacturing, biomedicine, and human interaction. Fully exploiting its potential for human interaction also requires us to go beyond ultrasound’s strict physical properties and explore the connections between physical stimulation and higher-level neurocognitive responses. In the second part of my talk, I will focus on how we will approach this challenge within the TOUCHLESS project, by combining mid-air haptics, neurocognitive models of touch, and predictive AI engines.

Biosketch: Diego Martinez Plasencia was a Lecturer of Interactive 3D graphics at the School of Informatics at the University of Sussex. His research ambition is to create multi-modal interactive systems that allow users to see, hear and feel virtual 3D content in a seamless manner, without any attachments or additional devices (e.g. glasses, gloves). His research involves a combination of 3D display approaches, HCI and applied physics to enable interactive systems, such as multi-view tabletop systems or multi-modal particle based displays (PBDs). His work has been supported by FET and EPSRC and he is currently leading a UK-China Research-Industry Partnership aimed at advancing PBD systems and showcasing them in permanent, large-scale public exhibitions. His work has been demonstrated at international forums, such as Festival della Sciencia or Founders Forum, and received extensive media coverage in ITV, CNN, Discovery Channel, BBC Click or Sky News. Before joining Sussex, he was a research associate at University of Bristol and assistant lecturer at UCLM.


Lightning Speakers

Patrick Haggard

Patrick Haggard
University College London (UCL)

Title: The sense of agency and the somatosensory system

Abstract: The sense of agency is the feeling of controlling one’s actions at will, and, through them, making things happen in our immediate environment.  Every time we switch on a light, or type a character on a keyboard, or interact with a tool, we experience agency over what we do, like a constant backdrop of normal mental life.  The most immediate effect of willing an action is the physical movement of the body.  Many motor control models argue that the core computation underlying sense of agency is the predictability of such immediate somatosensory effects of one’s own voluntary actions.  This talk will consider the special relation between the sense of agency and somatosensory signals.

Biosketch: Patrick Haggard leads the Action and Body Research group at the Institute of Cognitive Neuroscience, University College London.  He joined UCL in 1995.  He has published over 400 peer-reviewed papers, in the general area of human cognitive neuroscience, and often in interdisciplinary collaborations ranging from engineering and law through to neurology and philosophy.  His main research focuses on human voluntary action, and somatic sensory experience. In 2014, Haggard was elected a Fellow of the British Academy. He was awarded the CNRS Jean Nicod Prize in 2016, and became a Fellow of the Max Planck Society in 2020.


Mykola Maksymenko

Mykola Maksymenko
SoftServe

Title: Biosensing and touch for experience transfer at a distance

Abstract: The ubiquity of virtual communication has radically reduced our ability to convey the nuances of emotional color and personal experience. Limited to text and audio-visual digital channels, the large variety of communication tools can still hardly transmit the subtle features of body language, respiration, and other biometric signals that supplement facial and verbal expression and are seamlessly perceived in face-to-face interactions.
I will outline how an affective component could be reintroduced and enhanced by biometric channels and touch interfaces in remote communication.

Biosketch: Mykola Maksymenko is the Research and Development Director at SoftServe Inc., where he drives technological development and research in applied science and AI, human–computer interaction, and sensing. Mykola holds a Ph.D. in Theoretical Physics, and his academic research experience includes a number of graduate internships at the Universities of Magdeburg and Goettingen, funded by German funding bodies such as the DAAD and DPG, with further postdoctoral research at the Max Planck Institute for the Physics of Complex Systems (Germany) and the Weizmann Institute of Science (Israel). His HCI research at SoftServe combines ambient intelligence and the search for hidden features in biometric and vision signals to design new affective computing, wellness, and digital signage interfaces.


Antti Sand

Antti Sand
University of Tampere

Title: Hygienic interfaces with touchless tactile interaction on permeable mid-air displays

Abstract: The sense of touch is important to human interaction, as well as for interacting with our surroundings. Touch and tactile feedback are essential also when interacting with digital technology. Tactile surfaces may, however, transfer bacteria and viruses, especially on public interfaces. Mid-air haptics together with permeable screens formed of flowing light-scattering particles offers one solution for hygienic touchless tactile public interfaces. Gestural interaction from afar can suffer from the Midas touch issue, where the system incorrectly interprets extra gestures as selections. Permeable screens provide the users with a visual reference for the expected touch distance, possibly making selections easier. Paired with ultrasonic mid-air haptics, the users could experience a hygienic and unbreakable touchless touchscreen.

Biosketch: Antti Sand is a Lecturer at the Faculty of Information Technology and Communication Sciences at Tampere University in Finland. He received his master’s degree in computer science from Tampere University in 2013 and defended his doctoral thesis in February 2021. His research has included adding touchless haptic feedback to interaction on permeable mid-air displays.


Thomas Howard

Thomas Howard
IRISA CNRS

Title:  Bringing contactless tactile interaction to immersive virtual reality

Abstract: It has been repeatedly shown that haptic feedback in VR and AR interactions increases performance, perceived realism, immersion, and presence. Contactless haptics is promising for tactile feedback in VR and AR, as it does not require users to be tethered to, hold, or wear any device. This is less cumbersome, easy to set up, simplifies tracking, and leaves the hands free for concurrent interactions. These are all positive factors in terms of technology adoption by users as well as simplicity of integration into the VR and AR ecosystems. However, even the most mature contactless haptic technology – focused ultrasound arrays – is still somewhat in its infancy, suffering from technical limitations and leaving many unknowns when it comes to rendering capabilities, optimal approaches for rendering, as well as perception of stimuli. In this talk, I intend to provide a brief overview of the potential of contactless haptics for improving immersive VR interaction, as well as some of the work on integrating focused ultrasound arrays in VR, rendering, and perception conducted in our team at IRISA (Rennes).

Biosketch: Thomas Howard is a post-doctoral fellow with CNRS at IRISA in Rennes, France. He received his master’s degree in mechanical engineering from Arts et Métiers ParisTech and the Karlsruher Institut für Technologie (KIT) in 2012, and defended his Ph.D. in robotics at Université Pierre et Marie Curie (UPMC) in Paris in 2016. Supervised by Prof. Jérôme Szewczyk, his doctoral work focused on evaluating performance improvements in minimally invasive surgery when using tactile feedback provided through the handles of surgical tools. After a two-year break to travel the world, he returned to IRISA to work on the FET-Open H2020 project “H-Reality”, on the subject of mixed contact and contactless haptic interfaces for interaction with immersive digital environments. His research focuses on how to combine different haptic feedback technologies, stimuli, and interaction paradigms to provide richer haptics in immersive virtual reality. Relying heavily on human-subject studies, he approaches this challenge by assessing the performance, perceptual, and behavioral impacts of combined haptic system designs, from the mechatronics to the design of rendering and software interaction techniques.


Using the Skin as a Medium of Communication

Time: July 6, 11:15-13:30 EDT

Organizers: Lynette A. Jones (MIT), Hong Z. Tan (Purdue University) and Charlotte M. Reed (MIT)

Description: Tactile communication systems such as Tadoma and braille demonstrate that language can be conveyed effectively through the skin. In addition to these systems, which were developed for individuals with hearing and/or visual impairments, over the years there have been a number of attempts to develop tactile vocabularies and displays for general use. Two major challenges in creating these systems have been to determine the most effective unit of communication (i.e., character, phoneme, word, concept, tacton) and the optimal strategies for training people in their use. Over the past decade, with the growth in haptic technologies and devices, there has been a resurgence of interest in developing tactile language communication systems that are easy to learn and retain. This is in part driven by advances in wearable technologies and the need to offload the overworked visual and auditory systems. Fundamental questions remain to be answered in developing these tactile communication systems, and these will be the focus of the proposed Cross-Cutting Challenge session. The CCC session will cover the spectrum of issues related to using the skin as a medium of communication. Topics addressed by the speakers will include: how language should be encoded on the skin; whether such encoding should be the same for speech and text; how the location of the display influences the parameters available for use in communication; whether multisensory cutaneous displays enhance learning and lead to better outcomes; and the challenges in creating refreshable braille displays and other display technologies for individuals who are visually impaired. Speakers from both within and outside the haptics community will provide an overview of previous work as well as present talks focusing on ongoing research on this topic.
Different approaches toward conveying language through the skin will be covered, including systems designed to convey textual representations of written English and those that are based on speech communication.


Speakers

Roger Cholewiak

Roger Cholewiak
Emeritus, Princeton University

Title: Historical Perspectives on Cutaneous Communication… From a Wide Mantel

Abstract: Frank Geldard traced attempts to provide for tactile communication of speech back to the 7th century; we generally trace our current technological evolution to Gault’s work in 1926. Gault’s attempts to present speech directly as vibration on the skin predated work demonstrating that the skin’s ability to resolve mechanical vibrations is almost two orders of magnitude narrower than that of the ear. Understanding the skin’s limited vibrotactile bandwidth led to the need to devise other coding schemes if the cutaneous senses were to be used as a substitute modality for appreciating speech. These have included two-dimensional spectral displays of the speech waveform, pictorial representations of the vocal tract during speech, and vibrotactile presentations of just F0, using vibrotactile stimuli within the skin’s limited dynamic range, to name a few. Some of these were appropriate for sighted individuals who are deaf; others, like Tadoma (using the hand to feel vibrations, movement, and air flow in the face of the speaker), could provide speech reading for persons who were deaf and blind. Displays using vibrotactile stimuli had been limited by available devices, but improvements in technology over the past several decades have provided for dramatic new approaches. This history and evolution will be discussed with particular reference to the work of colleagues at the Cutaneous Communication Laboratory at Princeton.

Biosketch: Dr. Cholewiak arrived at Princeton University from the University of Virginia as a Research Associate in 1974 and became the Director of the laboratory in 1991. His research interests have primarily involved the variables underlying how the skin processes vibratory patterns presented with vibrotactile matrix systems in basic and applied situations. The majority of his work has employed arrays of mechanical contactors of his own design fitted to different body sites. He is internationally known in the area of tactile transducers, co-authoring a book chapter on vibrotactile transducers, and has consulted in the successful development of new tactor systems. These have included the Sensor Electronics (Medford, NJ) MTAC, an ultra-dense vibrotactile array; Angel Medical’s (Tinton Falls, NJ) Guardian, a cardiac monitor with an implanted vibrotactile alarm; and the Tactile Situation Awareness System (TSAS) at the Naval Aerospace Medical Research Laboratory (NAMRL, Pensacola, FL). He has authored six book chapters and encyclopedia articles on tactile sensitivity as well as over four dozen journal articles and presentations at national and international professional meetings. He has been a consultant with government institutions, including the National Institutes of Health and the National Academy of Sciences, educational institutions, the military, and commercial research laboratories, and is a peer reviewer for numerous professional journals, such as Perception & Psychophysics and Brain Research.


Charlotte M. Reed

Charlotte M. Reed
Massachusetts Institute of Technology (MIT)

Title: Speech Communication through the Sense of Touch

Abstract: Despite a long history of research, the development of a stand-alone tactile aid for speech communication has remained an elusive goal. The current talk will report on recent work conducted with a new tactile speech device based on the presentation of phonemic-based tactile codes for the 39 consonants and vowels of English. The strategy used to map phonemes to tactile codes will be described and results will be presented for performance on identification of the individual tactile codes as well as words composed of strings of tactile phonemes. On the basis of the relatively short training times and high levels of accuracy achieved in these studies, the results demonstrate the feasibility of this approach for communication of speech through the tactile sense.

Biosketch: Dr. Reed is a Senior Research Scientist in the Research Laboratory of Electronics (RLE) at the Massachusetts Institute of Technology (MIT). She received a B.S. in Education from Carlow College in 1969 and a Ph.D. in Bioacoustics from the University of Pittsburgh in 1973. She joined RLE as a postdoctoral fellow in 1975, and was promoted to Principal Research Scientist in 1989 and Senior Research Scientist in 2003. Dr. Reed’s research is concerned with the development of improved communication aids for persons with hearing impairment and deafness. In the auditory area, her research has been directed towards improved speech reception for users of hearing aids. This research includes work on frequency-lowered speech as well as research that uses simulations of hearing impairment to understand the role of audibility in the performance of hearing-impaired listeners, particularly on the difficult task of understanding speech in backgrounds of noise. Dr. Reed has also worked in the area of tactual communication of speech, including the development and evaluation of tactual aids that can substitute for hearing in persons with profound deafness. Her research in this area began with studies of natural methods of tactual communication used by members of the deaf-blind community, including the Tadoma method as well as the tactual reception of fingerspelling and sign language. Inspired by the ability of deaf-blind persons to communicate through the tactile sense alone, Dr. Reed’s research was extended to include the study of a variety of synthetic tactile devices for speech communication, with the goal of achieving the level of performance observed with Tadoma. Dr. Reed’s current research in the area of tactile speech communication is concerned with the development and evaluation of a phonemic-based tactile display, conducted in collaboration with Dr. Hong Z. Tan of Purdue University. Dr. Reed has also investigated interactions between the senses of hearing and touch in persons with normal hearing and sensorineural hearing loss, and how information is combined across the two senses. Because such multisensory interactions are an important aspect of human communication, disruptions in the ability to process information from two or more senses simultaneously may contribute to a variety of human communication disorders.


Lori Holt

Lori Holt
Carnegie Mellon University (CMU)

Title: Incidental Auditory Category Learning and Its Implications for Language Acquisition

Abstract: Learning often takes place without the support of instruction, explicit training, or overt feedback. Statistical learning, the process of becoming sensitive to statistical structure in the environment, is an influential example. Across species and development, organisms learn statistical regularities passively experienced over spoken syllables, visual shapes, tactile input, nonlinguistic tones, and even semantic categories without the benefit of explicit feedback, instruction, directed attention to the stimuli, or even an overt task. Yet, learning via passive accumulation of regularities fails under some circumstances. Troublingly, these circumstances mimic some of the complexities of learning in the natural world, such as those presented by substantial acoustic variability across talkers or continuous, fluent speech input. The core unanswered question, then, is how learning statistically-structured input proceeds when passive exposure is insufficient to drive learning and yet there is no explicit instruction or overt feedback. I will describe a program of research that examines the intermediate ground between passive exposure and instruction. This work examines whether active engagement in a multimodal perceptual environment ostensibly unrelated to learning supports statistical learning by virtue of temporal alignment of statistically-structured input with behaviorally-relevant actions, objects, and events. In this context, I will describe studies of adults learning speech and nonspeech auditory categories and present a candidate neurobiological network to support this incidental statistical learning with implications for language acquisition.

Biosketch: Prof. Holt is a Professor of Psychology at Carnegie Mellon University. Holt received a B.S. in psychology from the University of Wisconsin–Madison in 1995 and a Ph.D. in cognitive psychology with a minor in neurophysiology from UW–Madison in 1999. She has been at Carnegie Mellon University, and a member of the Center for the Neural Basis of Cognition, ever since. Research in her lab focuses on the cognitive processes that underlie speech perception, using speech processing as a platform for investigating learning, plasticity, categorization, cross-modal processing, object recognition, memory, attention, and development. Prof. Holt is investigating the learning that occurs in acquiring the sounds of a second language and how representations of the native language interact with this learning; how listeners “tune” their auditory perception to the statistical regularities of the sound environment; and how higher-level knowledge may influence early auditory object recognition and speech categorization. The major approach her group uses is to study human adult (and sometimes child) participants using perception and learning tasks. In addition, they make use of EEG and fMRI to address the neural bases of auditory processing.


Granit Luzhnica

Granit Luzhnica
Graz University of Technology

Title: Optimizing Patterns and Encoding for Vibrotactile Skin Reading

Abstract: Vibrotactile skin reading uses wearable vibrotactile displays to convey dynamically generated textual information. Such wearable displays have the potential to be used in a broad range of applications, especially for users with visual or auditory impairments. In skin reading, unique vibrotactile patterns encode letters of the alphabet, which are then combined to construct words and sentences. The representation of information is a challenge, as it should be optimized for both perception and throughput. I will discuss our approach of leveraging spatial acuity and utilizing optimization techniques for generating efficient vibrotactile patterns and encoding schemes. Furthermore, I will discuss the remaining challenges and potential applications of skin reading.

Biosketch: Dr. Luzhnica is a postdoctoral researcher at the Graz University of Technology. His research focuses on wearable human-computer interaction, in particular the use of haptic feedback to enable computer-to-human communication, and on using wearable sensors and machine learning to enable human-to-computer communication. Granit finished his bachelor’s degree at the University of Prishtina (Kosovo) and his master’s and PhD at Graz University of Technology (Austria). During his PhD, Granit researched methods for communicating vibrotactile-encoded messages through wearable vibrotactile displays, focusing heavily on finding optimized stimulation methods and encodings for vibrotactile messages.


Sile O’Modhrain

Sile O’Modhrain
University of Michigan

Title: Should the Braille Font be Refreshed in the Age of Refreshable Braille?

Abstract: Braille is a fixed-width font, meaning that, regardless of any other typesetting considerations, each braille character will occupy the same amount of space. The exact size of hard-copy braille characters varies slightly between major braille-producing countries. In the US, the size of braille characters is set out by the National Library Service in Specification 800, “Braille Books and Pamphlets,” which provides standards for dot height, dot diameter, inter-dot spacing, and inter-cell and inter-line spacing. These standards were developed to maximize the discriminability of dots within a braille cell while at the same time making cells compact enough to fit easily under even small fingertips. But these standards do not apply to braille signage, and they are only considered guidelines for manufacturers of refreshable displays. As we move toward large-format displays capable of rendering tactile images, we increasingly encounter a trade-off between retaining standard dimensions for literary braille and providing a surface with equally-spaced elements for rendering tactile images. Furthermore, refreshable displays offer new possibilities, such as rendering braille at different heights and using dynamic properties such as blinking dots to attract attention.

In this talk, I will discuss how, in the age of refreshable braille, we might usefully rethink braille standards so that they can continue to address the nature of braille as a tactile language that is evolving as new techniques and technologies for rendering braille are developed.

Biosketch: Prof. O’Modhrain is a professor in Performing Arts Technology at the School of Music, Theatre and Dance at the University of Michigan. Her research focuses on human-computer interaction, especially interfaces incorporating haptic and auditory feedback. She earned her master’s degree in music technology from the University of York and her PhD from Stanford University’s Center for Computer Research in Music and Acoustics (CCRMA). She has also worked as a sound engineer and producer for BBC Network Radio. In 1994, she received a Fulbright scholarship and went to Stanford to develop a prototype haptic interface augmenting graphical user interfaces for blind computer users.

For questions, please contact ccc@2021.worldhaptics.org.