Wednesday, July 8, 2015

Are Gesture-Based Inputs the Future of HMI and Embedded Machines?

The human hand is a wonder of anatomy. It is capable of precise, fine control, with a sensitivity to pressure and texture that modern robotics has yet to match. That’s why touch-based interfaces are growing increasingly popular as technology races to capture and quantify the information carried by gesture, pressure, and touch.

Google’s Project Soli, announced at the 2015 Google I/O conference, aims to capture gesture information and incorporate hand motions and signals into user interfaces for devices. While vision systems have already begun to make gesture-based interfaces possible, Project Soli collects that data with a new method: radar.
Because Soli senses motion with radio frequencies rather than cameras, it offers unique advantages for small embedded devices like smartphones and wearables. The radar can be embedded into a single chip, with no moving parts or lenses, and it can detect motion through materials like plastic or glass.
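To make that idea concrete, here is a minimal, hypothetical sketch in Python of how per-frame motion readings from a radar-style sensor might be turned into a discrete gesture event. It is not the Soli SDK; the velocity values, threshold, and gesture names are assumptions chosen purely for illustration.

def classify_gesture(velocities, threshold=0.5, min_frames=3):
    """Return 'push', 'pull', or None from a short window of velocity samples."""
    # Positive velocity = hand moving toward the sensor, negative = away.
    toward = [v for v in velocities if v > threshold]
    away = [v for v in velocities if v < -threshold]
    if len(toward) >= min_frames:
        return "push"   # hand moved steadily toward the sensor
    if len(away) >= min_frames:
        return "pull"   # hand moved steadily away from the sensor
    return None         # ambiguous or idle motion

# Example: a simulated window of per-frame velocity estimates.
print(classify_gesture([0.1, 0.7, 0.9, 0.8, 0.2]))   # -> "push"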


Where We Are Now
Hardware designers have speculated for years that today’s multi-touch technology is a middle ground between the mouse-and-keyboard input devices of the past and the gesture-based interfaces of the future. Multi-touch is a vital technology: it lets us remove peripheral input devices and interact with our machines directly through the screen. With this technology being adopted at breakneck speed, it’s easier to guess what might be available in the next generation of machine interface designs.

Multi-touch interfaces have also taught us how quickly we can adapt to new input methods. In the span of a decade, we’ve all learned to navigate screens through virtual buttons, swiping, pinching, and panning gestures. These routine gestures are already appearing in our industrial machines, where direct interaction with the application keeps attention focused on the information on the screen, improves operator safety, and makes upgrading or changing systems much easier.
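As a simple illustration of what one of those routine gestures means in code, here is a framework-agnostic Python sketch of a pinch-to-zoom calculation. The touch-point format is an assumption for the example, not any particular touchscreen API.

import math

def pinch_zoom(prev_points, curr_points):
    """Return a zoom factor: > 1 for pinch-out (zoom in), < 1 for pinch-in."""
    def spread(points):
        (x1, y1), (x2, y2) = points
        return math.hypot(x2 - x1, y2 - y1)
    prev_d = spread(prev_points)
    curr_d = spread(curr_points)
    if prev_d == 0:
        return 1.0   # degenerate input: both fingers at the same point
    return curr_d / prev_d

# Example: two fingers move from 100 px apart to 200 px apart -> 2x zoom.
print(pinch_zoom([(100, 100), (200, 100)], [(50, 100), (250, 100)]))   # -> 2.0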

Another important difference between multi-touch and gesture input is that with multi-touch, gestures and feedback are limited to two dimensions. We can only interact with points on a flat screen, and the only feedback we receive is what we feel through our fingertips, or the vibrations of haptic feedback if it is employed.

What Better Gesture Recognition Might Mean for Interface Design
The aim of Project Soli is not only to quantify and interpret human gestures, but to recognize them reliably enough that they can be used without picking up interference from the surrounding environment. This is especially critical if gesture-based interfaces are to move beyond consumer electronics into the industrial world.
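One simple way to suppress spurious detections, sketched below purely as an illustration, is to require the same gesture to be reported over several consecutive frames before acting on it. The frame count and gesture labels here are assumptions, not part of any real gesture-recognition library.

from collections import deque

class GestureDebouncer:
    def __init__(self, required_frames=4):
        self.required_frames = required_frames
        self.recent = deque(maxlen=required_frames)

    def update(self, detection):
        """Feed one per-frame detection; return a gesture only once it is stable."""
        self.recent.append(detection)
        if (len(self.recent) == self.required_frames
                and len(set(self.recent)) == 1
                and self.recent[0] is not None):
            stable = self.recent[0]
            self.recent.clear()   # avoid re-firing on the same motion
            return stable
        return None

debouncer = GestureDebouncer()
for frame in ["swipe", "swipe", "swipe", "swipe"]:
    result = debouncer.update(frame)
print(result)   # -> "swipe" only after four consecutive frames agree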

Gesture-based controls may well change how we interact with machines, and it is likely only a matter of time until they appear in HMI and SCADA interfaces. One advantage that gesture-based input has over current multi-touch input is that we can opt to remove the screen entirely. If a gesture can start or stop a machine, rotate a dial, or call up an OEE dashboard on a large overhead screen, then small screens on embedded HMIs are no longer necessary. The components of an embedded machine can shrink even further and will require much less computing power.
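A screenless, gesture-driven HMI of that kind might dispatch recognized gestures to machine commands along the lines sketched below. The gesture names and command functions are hypothetical and not part of any real SCADA product.

def start_machine():
    print("machine started")

def stop_machine():
    print("machine stopped")

def show_oee_dashboard():
    print("OEE dashboard pushed to the overhead display")

GESTURE_ACTIONS = {
    "push": start_machine,          # a push toward the sensor starts the line
    "pull": stop_machine,           # a pull away from the sensor stops it
    "circle": show_oee_dashboard,   # a circling motion calls up the dashboard
}

def dispatch(gesture):
    action = GESTURE_ACTIONS.get(gesture)
    if action is not None:
        action()   # unrecognized gestures are simply ignored

dispatch("circle")   # -> "OEE dashboard pushed to the overhead display"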

Gesture input may also improve operator safety. If the operator’s hands aren’t near moving parts, the chance of onsite accidents drops considerably. And if operators can manipulate robotic parts with precise hand control, machines will be able to function in otherwise dangerous environments or handle materials in a sterile lab.

One way feedback is being added to 3D gesture technology is through lasers that create a sense of resistance, felt as a gritty or tingling sensation on the skin, when the hand encounters virtual objects. We may see this incorporated into future iterations of gesture-based controls.


It will be interesting to see where developments in user interfaces like Project Soli take us. Devices like the Leap Motion controller, which relies on cameras and sensors, are already finding a place in the market. While we don’t yet know whether vision systems or radar will prove to be the more practical solution, there’s little doubt that three-dimensional, gesture-based input will be coming to our interfaces in the near future.
