People have always had to learn new behaviors in order to operate technology. Basic computer functions like dragging and dropping, or mouse-and-pointer interaction, didn't just come naturally. Even common gestures like touching, swiping, and pinching had to be mastered and recontextualized amid a landscape of smartphones and tablets. But as technology becomes ever more present in our lives, it's fair to start asking technology to take a few more cues from us. This idea is central to the work of Advanced Technology & Projects (ATAP), where we're creating a whole new interaction paradigm based on the nuances of human movement and the promise of a miniature radar chip called Soli.
Soli is capable of reading much more than gestures or explicit interactions, because it can also detect implicit signals like presence and body language. The two are related because they both deal with movements of the human body, and when combined they allow for a framework inspired by the way people normally organize their social interactions in daily life. How you behave in a social context, like at a bar or a party, is different from how you behave when you're home alone, relaxing on the couch. Soli, by design, understands what's happening around the device and can therefore interpret human intent by moving through three different states: aware, engaged, and active.
During our research, we also turned to dance theorist Rudolf von Laban to better understand the notion of body attitudes. We learned that the way we hold our bodies, in terms of posture and movement, reflects inner intention, and that by understanding and leveraging each subtle nuance, it's possible to better determine user intention. For example, much as you enter a room and register the people around you, Soli mirrors that initial awareness state, first understanding what's happening around the device. If you want to start talking to someone in a crowded room, you'll first need to get their attention. Soli makes the same distinction, waiting for behavioral cues like a reach or a lean toward the phone before it engages and anticipates what you're going to want to do next.
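The aware, engaged, and active states described above behave like a small state machine driven by behavioral cues. Here's a minimal sketch of that idea in Python; the state names come from the article, but the signal names ("reach", "gesture", "idle") and the transition logic are illustrative assumptions, not real Soli APIs.

```python
from enum import Enum, auto

class SoliState(Enum):
    AWARE = auto()    # device passively registers what's happening around it
    ENGAGED = auto()  # user has reached or leaned toward the device
    ACTIVE = auto()   # user is performing an explicit interaction

def next_state(state: SoliState, signal: str) -> SoliState:
    """Advance the interaction state based on a detected behavioral cue.

    Signal names here are hypothetical placeholders for the kinds of
    cues the article describes (a reach, a gesture, stepping away).
    """
    transitions = {
        (SoliState.AWARE, "reach"): SoliState.ENGAGED,
        (SoliState.ENGAGED, "gesture"): SoliState.ACTIVE,
        (SoliState.ENGAGED, "idle"): SoliState.AWARE,
        (SoliState.ACTIVE, "idle"): SoliState.AWARE,
    }
    # Cues that don't match a transition leave the state unchanged.
    return transitions.get((state, signal), state)
```

The key design point is that the device never jumps straight from aware to active: an explicit gesture only registers once a behavioral cue, like a reach, has signaled engagement.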
As technology progresses, the ultimate goal is devices that truly understand us, so we don't need to spend precious moments trying to explain ourselves. This means less time managing the technology in your life and the distractions it brings. Take something as simple as silencing an alarm. As soon as you wake up, you have to reach for the phone, pick it up, bring it to your face, and find a little button to press. Once you're there, you see a push notification with news you can't ignore, or maybe you hop on Twitter, and soon you're down the rabbit hole of your digital life before you've even gotten out of bed. Imagine a different scenario: when you're far away from your device, interface elements appear larger automatically, shrinking as you approach; your voice assistant provides more information without prompting, because it understands you can't see the screen. These new patterns could mean an end to small but time-consuming microtasks, like switching your phone on and off, that keep you in the digital world longer than you intended. And they would free us up to spend more time connecting with other people and being present in the physical world.
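The distance-aware interface behavior above can be sketched as a simple scaling function: elements render larger when you're far away and shrink to normal size as you approach. The thresholds and the linear interpolation here are assumptions for illustration, not values from Soli.

```python
def ui_scale(distance_m: float, near: float = 0.3, far: float = 2.0,
             max_scale: float = 2.0) -> float:
    """Return a UI scale factor based on the user's distance in meters.

    Near/far thresholds and max_scale are hypothetical tuning values:
    within arm's reach the UI is normal size; beyond `far` it is at
    its largest; in between it interpolates linearly.
    """
    if distance_m <= near:
        return 1.0
    if distance_m >= far:
        return max_scale
    t = (distance_m - near) / (far - near)
    return 1.0 + t * (max_scale - 1.0)
```

A voice assistant could use the same distance signal to decide how much to speak aloud, reading more detail when the scale factor indicates the screen is too far away to see.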
Eventually, there might even be real-time learning for gestures, so the machine can adapt and relate to you, as a person, specifically. What once felt robotic will take on new meaning. More importantly, the next generation of Soli could embrace the beauty and diversity of natural human movement, so each of us could create our own way of moving through the world with technology, just as we do in everyday life.