From idea to product.
We built Soli from the ground up around an impossible idea: to create a radar small enough to fit into a smartwatch. Over the course of five years, we invented, designed and built the Soli radar chip and platform, taking it from early-stage prototypes to a single solid-state component that can be integrated into consumer devices.
Soli will ship in the Google Pixel 4¹ phone after years of iterative design and engineering, working with experts in semiconductor design, digital signal processing, algorithm development, machine learning and interaction design.
Powered by Machine Learning.
A custom-built ML and data collection pipeline allowed us to train a robust ML model. Using this model, Soli can reliably recognize a wide range of movements. On Pixel 4, the model runs on device, never sends sensor data to Google servers, and interprets motion as Quick Gestures.
How it works.
Soli’s radar emits electromagnetic waves in a broad beam. Objects, such as a human hand, within the beam scatter this energy, reflecting some portion back towards the radar antenna.
Properties of the reflected signal, such as energy, time delay, and frequency shift, capture rich information about the object’s characteristics and behaviors, including size, shape, orientation, material, distance and velocity.
By processing the temporal variations in these signal characteristics, Soli can distinguish between complex movements and infer these properties for objects within its field.
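The relationships above can be made concrete with two textbook radar formulas: round-trip time delay gives distance, and Doppler frequency shift gives radial velocity. This is an illustrative sketch of the physics, not Soli's actual signal processing code; the 60 GHz carrier is Soli's operating band, while the example delay and Doppler values are assumptions.

```python
# Illustrative radar physics: recovering distance and velocity from
# properties of the reflected signal (time delay and Doppler shift).
C = 3e8  # speed of light, m/s


def range_from_delay(time_delay_s: float) -> float:
    """Round-trip time delay of the echo -> target distance in meters."""
    return C * time_delay_s / 2.0


def velocity_from_doppler(doppler_hz: float, carrier_hz: float) -> float:
    """Doppler frequency shift -> radial velocity in m/s."""
    wavelength = C / carrier_hz
    return doppler_hz * wavelength / 2.0


# A hand ~30 cm away returns the echo after roughly 2 nanoseconds:
print(range_from_delay(2e-9))            # ~0.3 m
# At 60 GHz, a 400 Hz Doppler shift corresponds to ~1 m/s of radial motion:
print(velocity_from_doppler(400, 60e9))  # ~1.0 m/s
```

Note how sensitive the Doppler term is at 60 GHz: the 5 mm wavelength means even slow finger motion produces an easily measurable frequency shift, which is what makes millimeter-wave radar attractive for fine gesture sensing.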
A sophisticated and versatile chip.
The custom Soli chip greatly reduces system design complexity and operates with low power consumption. In our journey toward this form factor, we rapidly iterated through several hardware prototypes, beginning with a large bench-top unit built from off-the-shelf components.
We developed and evaluated chip designs based on two modulation architectures: a Frequency Modulated Continuous Wave (FMCW) radar and a Direct-Sequence Spread Spectrum (DSSS) radar. Both chips integrate the entire radar system into a small package, including multiple beam forming antennas that enable 3D tracking and imaging. And, unlike traditional radars, Soli has no moving hardware components.
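The FMCW architecture mentioned above can be summarized in one formula: the transmitted frequency chirp is mixed with its echo, and the resulting beat frequency is proportional to target range. The sketch below shows that principle with assumed parameters; it is not Soli's chip configuration.

```python
# Hedged sketch of the FMCW ranging principle: for a chirp of bandwidth B
# swept over duration T, a target at range R produces a beat frequency
# f_beat = 2 * B * R / (c * T), so range is recovered as
# R = c * f_beat * T / (2 * B).
C = 3e8  # speed of light, m/s


def fmcw_range(beat_hz: float, bandwidth_hz: float, chirp_s: float) -> float:
    """Convert a measured beat frequency to target range in meters."""
    return C * beat_hz * chirp_s / (2.0 * bandwidth_hz)


# Example: a 4 GHz sweep over 100 microseconds, with an 80 kHz beat tone,
# places the target ~30 cm from the antenna:
print(fmcw_range(80e3, 4e9, 100e-6))  # ~0.3 m
```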
A custom pipeline for spatial understanding.
The Soli interaction pipeline implements algorithmic stages of increasing data abstraction from raw radar signal to application-specific gesture labels. This pipeline uses several stages of signal abstraction: from raw radar data to signal transformations, a custom-built machine learning training infrastructure for abstracting features, detection and tracking, gesture probabilities, and finally, UI tools to interpret gesture controls.
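Structurally, the staged abstraction described above can be sketched as a chain of transformations from raw frames to a gesture label. All stage names and thresholds below are hypothetical placeholders, not Soli's actual pipeline code.

```python
# Minimal structural sketch of a staged interaction pipeline: each stage
# raises the level of abstraction, from raw radar data toward a UI-level
# gesture label.
from typing import Callable, List

Stage = Callable[[object], object]


def run_pipeline(raw_frame: object, stages: List[Stage]) -> object:
    """Feed a raw radar frame through each abstraction stage in order."""
    data = raw_frame
    for stage in stages:
        data = stage(data)
    return data


# Hypothetical stand-ins for the real stages:
stages: List[Stage] = [
    lambda raw: {"transform": sum(raw)},              # signal transformations
    lambda t: {"features": [t["transform"] / 10.0]},  # feature abstraction
    lambda f: {"swipe": 0.9} if f["features"][0] > 1 else {"swipe": 0.1},
    lambda p: max(p, key=p.get),                      # UI-level gesture label
]

print(run_pipeline([3, 4, 5], stages))  # prints "swipe"
```

Chaining stages this way keeps each abstraction level independently testable, which mirrors why a pipeline of increasing abstraction is a natural fit for sensing systems.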
This sensing paradigm is efficient for on-device, bandwidth-constrained systems and does not require high spatial resolution. In fact, Soli’s spatial resolution is coarser than the scale of most finger gestures. Instead, Soli’s fundamental sensing paradigm relies on resolving motion by extracting subtle changes in the received radar signal over time. By processing these temporal signal variations, Soli can identify and recognize complex movements within its field. The Soli libraries extract real-time signals from radar hardware, outputting signal transformations, high-precision position and motion data, and gesture labels and parameters at frame rates from 100 to 10,000 frames per second.
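The idea of resolving motion from temporal variation rather than spatial resolution can be demonstrated with a standard slow-time Doppler analysis: stack successive radar returns and take an FFT across frames, and a moving target shows up as a clean frequency peak. The frame rate and Doppler value below are assumptions for illustration, not Soli's parameters.

```python
import numpy as np

# Sketch: extracting motion from temporal signal variation. Each radar frame
# samples the scene once ("slow time"); a moving hand rotates the phase of
# the return from frame to frame, producing a Doppler tone that an FFT
# across frames reveals, even with coarse spatial resolution.
frame_rate = 2000.0   # frames per second (assumed slow-time sampling rate)
n_frames = 256
doppler_hz = 125.0    # Doppler shift from an assumed moving hand

t = np.arange(n_frames) / frame_rate
returns = np.exp(2j * np.pi * doppler_hz * t)  # per-frame complex returns

spectrum = np.abs(np.fft.fft(returns))
doppler_bins = np.fft.fftfreq(n_frames, d=1.0 / frame_rate)
print(doppler_bins[np.argmax(spectrum)])  # ~125.0 Hz: motion recovered
```

The Doppler resolution here is frame_rate / n_frames, so longer observation windows (more frames) separate finer differences in velocity, which is why high frame rates matter for resolving subtle gestures.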
Lightweight and hardware agnostic, our interaction pipeline allows us to use the same algorithms and software on different types of radar. Its efficient implementation enables touchless gesture interaction on low-power and cost-effective embedded platforms used in wearable, mobile and IoT applications.
As development has progressed, Soli’s capabilities have improved and our design vision has expanded. Take a look at where we began to understand the fundamentals of Soli.
Co-authored by Soli team