Visualising Haptics
Non-visual forms of data communication are an exciting research topic, driven by the need for accessible data representations and by humans' proven capacity to understand information better (or differently) through multi-sensory modalities. Haptic visualisation is therefore the realisation of qualitative or quantitative information through kinaesthetic and/or cutaneous stimulation [1].
In 2014, Long et al. published a seminal paper on how to create three-dimensional haptic shapes in mid-air using focused ultrasound [2]. A lot of progress has been made at Ultrahaptics since then, on both the hardware and software fronts, the majority of which forms the company's core technology and IP. The 2014 paper, entitled "Rendering Volumetric Haptic Shapes in Mid-Air using Ultrasound", nevertheless captures the essence of much of our continued R&D effort, and is therefore something I've been wanting to write about for some time now.
Let's start with the journal of choice. The paper was published in ACM Transactions on Graphics, which initially came as a surprise to me, but which I later accepted on the following argument: we usually think of graphics as computer-generated 2D/3D still or animated images, and therefore as quite disconnected from the field of haptics and the sense of touch. For a blind person, however, graphics are haptics. The connection therefore does exist, and one simply needs to think of ultrasonic haptics as computer-generated 2D/3D still or animated images that are sensed by the skin rather than the eyes.
In visual graphics, the blending of RGB colours, the contrast and dynamic range, the method for rendering polygonal meshes, the resolution, and the animation frame rate are some of the fundamentals that determine image quality. Analogous fundamentals exist for Ultrahaptics' mid-air graphics, most of which were first studied in this 2014 paper:
Novel algorithms were introduced that animate 2D geometrical shapes in such a way that 3D objects are felt (the first sketch after this list gives a flavour of the underlying focusing maths).
Efficient algorithms were discussed that unlocked the rendering of multiple focus points in real time and at predictable frame rates (see the second sketch below).
Optimisation algorithms were introduced that boost hardware performance so as to achieve stronger mid-air tactile sensations.
Simulation software was used to visualise the results, and user studies were conducted to verify them.
New hardware architectures were designed and built to implement the above algorithms.
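To make the focusing idea concrete, here is a minimal sketch of how a phased array can steer a single tactile focal point. This is my own illustration under simplifying assumptions (free-field propagation, an idealised 16×16 array with 10.5 mm pitch, a 40 kHz carrier, 200 Hz amplitude modulation), not the paper's or Ultrahaptics' actual rendering pipeline:

```python
import numpy as np

# Sketch: steer an array of ultrasonic transducers so their emissions arrive
# in phase at a chosen focal point, then amplitude-modulate the output at a
# low frequency so the skin can perceive it. All geometry and frequency
# values below are illustrative assumptions.

SPEED_OF_SOUND = 343.0     # m/s in air at ~20 degrees C
CARRIER_FREQ = 40_000.0    # Hz, a common ultrasonic transducer frequency
MOD_FREQ = 200.0           # Hz, within the skin's vibrotactile sweet spot

def transducer_grid(n=16, pitch=0.0105):
    """Positions of an n x n transducer array in the z=0 plane (metres)."""
    coords = (np.arange(n) - (n - 1) / 2) * pitch
    xx, yy = np.meshgrid(coords, coords)
    return np.stack([xx.ravel(), yy.ravel(), np.zeros(n * n)], axis=1)

def focus_phases(positions, focal_point):
    """Per-transducer phases so all wavefronts coincide at the focus."""
    k = 2 * np.pi * CARRIER_FREQ / SPEED_OF_SOUND   # wavenumber
    distances = np.linalg.norm(positions - focal_point, axis=1)
    # Transducers farther from the focus are phase-advanced, so every
    # emission arrives at the focal point with the same phase.
    return (k * distances) % (2 * np.pi)

def drive_signal(phases, t):
    """Amplitude-modulated carrier for each transducer at time t (seconds)."""
    envelope = 0.5 * (1 + np.sin(2 * np.pi * MOD_FREQ * t))   # 0..1 envelope
    return envelope * np.sin(2 * np.pi * CARRIER_FREQ * t + phases)

positions = transducer_grid()
phases = focus_phases(positions, focal_point=np.array([0.0, 0.0, 0.20]))
print(drive_signal(phases, t=0.001)[:4])   # first four transducer outputs
```

The modulation envelope is the key perceptual trick: the 40 kHz carrier itself is far too fast for the skin's mechanoreceptors, so the focal point's pressure is modulated at a frequency the skin can actually feel.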
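Multiple simultaneous focal points are harder, because the fields interfere and the single-point solutions cannot simply be applied side by side; solving for all phases jointly and efficiently is precisely one of the paper's contributions. The back-propagation heuristic below, again my own hedged sketch rather than the paper's method, only hints at the problem:

```python
import numpy as np

SPEED_OF_SOUND = 343.0
CARRIER_FREQ = 40_000.0

def naive_multifocus_phases(positions, focal_points):
    """Back-propagation heuristic: treat each desired focal point as a point
    source, propagate it back to every transducer, sum the resulting
    phasors, and emit at the summed phase. This can work tolerably for
    well-separated points but ignores how the points interfere with one
    another, which a joint solution handles far better."""
    k = 2 * np.pi * CARRIER_FREQ / SPEED_OF_SOUND
    field = np.zeros(len(positions), dtype=complex)
    for fp in focal_points:
        distances = np.linalg.norm(positions - fp, axis=1)
        field += np.exp(1j * k * distances)
    return np.angle(field) % (2 * np.pi)

# Two focal points 4 cm apart, 20 cm above the array (positions as above):
# phases = naive_multifocus_phases(positions, [np.array([-0.02, 0, 0.20]),
#                                              np.array([ 0.02, 0, 0.20])])
```

Sweeping such focal points along the boundary of a 2D cross-section quickly enough is what makes a volumetric shape feel like a continuous surface rather than a set of isolated points.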
All these investigations, and many more, continue at Ultrahaptics today, where we try to uncover and exploit the various trade-offs between quality, performance, and cost whilst remaining within the realms of physics and perception. Moreover, we strongly subscribe to the open innovation model and collaborate with multiple universities and research labs to facilitate knowledge transfer and accelerate innovation in haptics, HCI, and focused ultrasound.
If you are interested in ultrasonic mid-air haptics, I strongly encourage you to read Long et al.'s paper and to join our Academic Program. I'm always happy to discuss new research directions and the use of Ultrahaptics development kits, so please get in touch or book a Skype call with me here.
[1] Culbertson, Heather, Samuel B. Schorr, and Allison M. Okamura. "Haptics: The Present and Future of Artificial Touch Sensation." Annual Review of Control, Robotics, and Autonomous Systems 1 (2018): 385-409.
[2] Long, Benjamin, Sue Ann Seah, Tom Carter, and Sriram Subramanian. "Rendering volumetric haptic shapes in mid-air using ultrasound." ACM Transactions on Graphics (TOG) 33.6 (2014): 181.