BBS Leadership Lectures | Digitizing Touch | Lihi Zelnik-Manor

7 March 2023

This was the third session of the Leadership Lectures, the series of thematic meetings dedicated to key aspects of contemporary business and addressed to the BBS Community.

The latest welcome guest at Villa Guastavillani was Lihi Zelnik-Manor, Professor of Electrical and Computer Engineering at the Technion – Israel Institute of Technology. Introducing her was BBS colleague and lecturer Alon Wolf, Technion's vice president for external relations and resource development since 2019.

Lihi Zelnik-Manor is a robotics expert whose career in artificial intelligence began in the 1990s when, as a mechanical engineer, she came to realize that, "Okay, robots are beautiful, but it's not just about mechanical engineering. They have to have a brain," and wanted to find out "how this robotic brain works." To do so, she earned a PhD in Computer Vision at the Weizmann Institute and has since followed an academic career working primarily in this area.

Computer Vision is an interdisciplinary field of research that aims to develop algorithms and techniques enabling computers to reproduce the visual functions and processes of human sight. The goal is not only the ability to recognize objects, people, or animals within a single image or a sequence of images (such as a video), but to extract useful information from these elements, progressively reaching higher and higher levels of abstraction and understanding: in other words, the ability to reconstruct a context around the image and give it authentic meaning. To operate accurately, Computer Vision systems must be trained on a massive number of images that, properly labeled, form the dataset from which the algorithm learns. The field thus belongs to the broader domain of artificial intelligence and so-called machine learning.
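For readers who want a concrete picture of that training process, the sketch below shows a toy version of the supervised-learning loop just described, in which labeled images gradually teach a small network to classify them. It is purely illustrative (random stand-in images, a deliberately tiny PyTorch model), not any system discussed in the lecture.

```python
# A minimal sketch of the supervised-learning loop described above:
# labeled images train a model to recognize objects. The tiny network
# and the fake dataset are illustrative stand-ins only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for a labeled image dataset: 64 random 3x32x32 "images",
# each tagged with one of 10 class labels.
images = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 10, (64,))
loader = DataLoader(TensorDataset(images, labels), batch_size=16)

# A deliberately small classifier: convolution -> pooling -> linear head.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# The learning loop: predict, compare against the human-provided label,
# adjust the weights. Repeated over millions of real images, this is
# how a vision model learns to "see".
for epoch in range(3):
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```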

But this was only the starting point for Zelnik-Manor. Indeed, as an academic, she believes that studies in this field should precede industrial applications and always look ahead, projecting the most important issues of today onto the long term and imagining their developments 20 or even 70 years from now. And it was this desire to look ahead that led her to take up haptics, the topic that gives the Leadership Lecture its title: digitizing touch. To address it, an introduction is needed that places the study of haptic perception by machines in the broader context of A.I. research begun in the 1950s. While we are further along with the other senses, as Computer Vision shows, Zelnik-Manor explained, with haptics we have more questions than answers.

To understand better, it is necessary to start with the definition of the fifth sense: "Think of being able to touch with your hands anything you can imagine touching. That is haptics. When we talk about the digitization of touch, the example I like best is that of the glass of water. If I want to grab a glass of water, there are two anatomical mechanisms involved and they work separately, although they are somewhat synchronized. One mechanism is the kinesthetic one, given by the muscles and joints, while the other is in the skin. When I move my arm, I can feel the movement, I can close my eyes and feel my arm going toward the glass and my fingers closing. But to pick up this glass, I have to touch it, and the skin closes the loop. The fingers touch the glass, and there are sensors in our skin that give the brain feedback through electrical signals." There are mechanoreceptors in our skin sensitive to three types of forces: static pressure, high- and low-frequency vibrations, and friction. These are the three types of forces we can feel with our skin; we can also feel heat and pain. The skin's mechanoreceptors are distributed throughout the body, but not with the same density: where there are fewer receptors, it is harder to pick up detailed information.
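As a concrete illustration of the two channels in the glass-of-water example, here is a minimal sketch (our own, not from the lecture) of a simulated reach-and-grasp: a kinesthetic signal tracks the arm's position throughout, while the cutaneous signals (pressure, vibration, friction) fire only at contact and "close the loop." All numbers are invented.

```python
# Illustrative sketch of the two haptic channels: kinesthesia reports
# arm position continuously; skin receptors respond only on contact.
# The glass position and signal magnitudes are hypothetical.

GLASS_POSITION = 0.30  # metres from the shoulder, made up for the demo

def kinesthetic_feedback(hand_position: float) -> float:
    """Muscles and joints report where the arm is, even with eyes closed."""
    return hand_position

def cutaneous_feedback(hand_position: float) -> dict:
    """Skin receptors respond only when the fingers actually touch."""
    in_contact = hand_position >= GLASS_POSITION
    return {
        "pressure": 1.0 if in_contact else 0.0,   # static pressure
        "vibration": 0.2 if in_contact else 0.0,  # contact transient
        "friction": 0.5 if in_contact else 0.0,   # resistance to sliding
    }

hand = 0.0
while hand < GLASS_POSITION:
    hand += 0.1  # the arm moves; the kinesthetic channel tracks it
    print(f"arm at {kinesthetic_feedback(hand):.2f} m ->",
          cutaneous_feedback(hand))
```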

But why is it important to digitize this process? "The starting point is that today we can see and hear digitally," Zelnik-Manor explained. "I can slip on a pair of VR glasses and see and hear digital objects, I can take a video and have a digital version of what was filmed, but I can't touch it." The applications of such a possibility would no doubt be remarkable. The first area envisioned by the Israeli scientist is medicine, with several applications: "First of all, remote medicine. Let's imagine, although it will take many, many years, being able to treat someone suffering from Covid, for example, without real physical contact. We can also imagine training doctors on a large scale by allowing them to practice, for example, cardiac surgery with the ability to touch a virtual heart. And for remote medicine we can think not only of events in two different places, but also of instruments already used today, such as laparoscopy, which could be enriched with the haptic dimension, amplifying their potential: today's instruments provide no haptic feedback, and this is a limitation for the surgeon." Applications in the medical field do not end there: one only has to think of rehabilitation to imagine people with sensory deficits relying on a digitized process to retrain and recover their perception. Nor is it only about medicine; there are decidedly more playful contexts as well, among which Zelnik-Manor mentioned the work of painters and sculptors, who could create digital artworks by touching and shaping them, and the possibility of touching loved ones or a pet when one cannot be physically close.

The state of the art, judging by the devices currently available, does not offer particularly dazzling prospects: devices exist in industry, but on a very small scale, such as gloves for interacting with virtual worlds or tools for heavy machinery, and they provide only kinesthetic feedback. This means that although the user can feel force against an object, the skin cannot "close the loop," and there is still nothing to stop a hand from passing through virtual objects. "Some companies, like Tanvas and Immersion," Zelnik-Manor explained, "are developing technologies that create friction when you touch a screen. You can feel that you are touching something and that it is hard to move, which is great for cursors. For example, imagine you are driving a car and want to turn up the radio: moving your finger on the screen gives you the feeling of moving something, so there is no need to look away from the road to check with your eyes. Even more interesting is Ultraleap: a tablet with many ultrasonic speakers, all aligned so that the ultrasonic waves they create converge on the same point; when you put your hand above it, you feel a force, as if you were sensing objects in mid-air. It is the stuff of science fiction, but it is still very basic." Then there are tools that project straight into a truly fantastic future, such as the Teslasuit (Elon Musk is not involved this time): a suit that provides full-body feedback. Very expensive and still a work in progress, it is nonetheless a very interesting product. Meta, for its part, is working with another company, called Haptics, to go beyond kinesthesia alone and give the skin a feeling of texture, but here too the technology is still far from ready.

What about the research labs? Very interesting technologies are being studied there, but they remain far from product application. For example, there are systems made of myriad pins that take the shape of the object they rest on and press against the skin, letting the user feel its texture. Researchers start from existing technologies and study innovative evolutions and applications, the most fruitful approach for achieving useful results on a large scale.
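The mid-air ultrasound idea is easier to grasp with a little arithmetic: each speaker is phase-shifted so that its wave arrives at the chosen focal point in step with all the others, where they reinforce into a pressure spot the hand can feel. The sketch below computes those phase delays for a small hypothetical emitter grid; it is a back-of-the-envelope illustration, not any company's actual design.

```python
# Back-of-the-envelope phased-array focusing: delay each emitter so all
# ultrasonic wavefronts arrive at one focal point in phase. Grid layout,
# spacing, and focus point are arbitrary illustrative choices.
import math

SPEED_OF_SOUND = 343.0  # m/s in air
FREQUENCY = 40_000.0    # 40 kHz, a typical ultrasonic transducer

def phase_delays(emitters, focus):
    """Phase (radians) each emitter should lead by so that all waves
    reach `focus` with the same phase and reinforce each other."""
    delays = []
    for emitter in emitters:
        distance = math.dist(emitter, focus)
        travel_time = distance / SPEED_OF_SOUND
        delays.append((2 * math.pi * FREQUENCY * travel_time) % (2 * math.pi))
    return delays

# A 4x4 grid of emitters spaced 1 cm apart, focusing 15 cm above centre.
grid = [(0.01 * i, 0.01 * j, 0.0) for i in range(4) for j in range(4)]
print(phase_delays(grid, focus=(0.015, 0.015, 0.15)))
```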

Zelnik-Manor shared with the BBS Community the kind of research being conducted at the Technion. It is surprising and fascinating that, starting from a reverse-engineering approach and relatively simple, inexpensive materials, it was possible to design a real virtual experience that also involves touch. Friction, vibration and pressure were simulated in the virtual world thanks to a mechanical tool used much like a computer mouse, whose built-in elements let the fingers experience all three sensations. "Each approach has its advantages," Zelnik-Manor explained. "We decided to go with an inexpensive solution, and we knew it would not be a good device: you can't start and get a good solution the first time you do something, it's impossible. So, we started with a simple solution that works, and now we want to better understand how people perceive. We don't know much about the sense of touch. Our knowledge about it is limited, as is our knowledge of perception and of what might work, but we are learning and already working on the next generation." Here, then, is where the task of academia becomes complementary and parallel to that of industry, which always and necessarily aims at a product: "Build, experiment, learn. And then rebuild, experiment and so on, until we eventually come to a good device, something that works."
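The lecture did not go into the device's internals, but the rendering idea behind a mouse-like tool of this kind can be sketched in a few lines: a virtual surface is described by a handful of properties, and the cursor's speed and pressure are mapped onto the three actuated sensations. Everything below (the VirtualSurface fields, the formulas, the numbers) is our illustrative assumption, not the Technion design.

```python
# A guessed-at sketch of haptic rendering for a mouse-like device:
# virtual surface properties are mapped onto the three sensations the
# lecture names. Formulas and values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class VirtualSurface:
    friction_coefficient: float  # how much the surface resists sliding
    roughness: float             # drives vibration amplitude
    stiffness: float             # drives pressure under the fingertip

def render_haptics(surface: VirtualSurface, speed: float, press_depth: float):
    """Map cursor state onto the device's three actuators."""
    pressure = surface.stiffness * press_depth          # Hooke-like contact
    vibration = surface.roughness * speed               # faster slide, stronger buzz
    friction = surface.friction_coefficient * pressure  # Coulomb-like drag
    return {"pressure": pressure, "vibration": vibration, "friction": friction}

wood = VirtualSurface(friction_coefficient=0.4, roughness=0.7, stiffness=2.0)
print(render_haptics(wood, speed=0.05, press_depth=0.002))
```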

This, then, is the state of the art. But many challenges remain, and solving them requires integrating these kinds of studies with those on artificial intelligence and machine learning mentioned at the beginning. What is the next step? Zelnik-Manor summed it up in a few words, before describing the possible applications in detail, with examples that seemed inspired by a science fiction movie. "A better device, better algorithms. But we also need to talk about how we activate the device itself. Suppose we have the best one around, capable of creating any desired texture and providing precise feedback. How do we activate it? So, there is another part, and that is where my experience in A.I. should come in."

What is needed to get there is, not surprisingly, what we call the "new oil": data. But while it is possible to take a picture and ask an A.I. to detect the visual characteristics that determine the nature of objects, there are as yet no cameras that can detect the tactile properties of objects. Collecting haptic data at the scale Computer Vision achieved is therefore as important as it is complex. "Millions of people have contributed images uploaded online to create the database to draw from, thus helping to educate the A.I., but how do we do haptic data collection on a large scale? It is not yet possible." It is, of course, an opportunity, and there are already studies in this regard: "We are working on solutions, we are not there yet though," Zelnik-Manor said, "but there is a whole world of new challenges coming with this technology." Challenges, including ethical and social ones, that Zelnik-Manor believes must be anticipated and addressed in advance, without losing the opportunity to improve people's lives: "I think we can do some interesting things already within five years. Definitely. It's a great area for research, exploration and reflection," she concluded. She then took many questions from the audience, which had followed with interest and passion a Lecture dedicated to a topic as complex as it is fascinating.
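What might large-scale haptic data even look like? No standard exists yet, but a plausible record, sketched below purely as a hypothetical, would pair the visual appearance of a surface with physical measurements (a friction coefficient, a vibration trace from a sliding probe) and a human label, mirroring how labeled images fuelled Computer Vision.

```python
# A purely hypothetical record layout for "haptic data": there is no
# established large-scale dataset format, so every field here is an
# assumption in the spirit of the challenge described above.
from dataclasses import dataclass, field

@dataclass
class HapticSample:
    image_path: str              # what the surface looks like
    friction_coefficient: float  # measured resistance to sliding
    # accelerometer readings recorded while a probe slid over the surface
    vibration_trace: list[float] = field(default_factory=list)
    label: str = ""              # e.g. "sandpaper", "silk"

sample = HapticSample(
    image_path="textures/sandpaper_001.jpg",
    friction_coefficient=0.9,
    vibration_trace=[0.0, 0.12, -0.08, 0.15],
    label="sandpaper",
)
print(sample)
```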


