
Horizon 2020: IMIR-UP in a Nutshell

IMIR-UP is funded by the European Union’s Horizon 2020 research and innovation program under grant agreement No 849692
Project Timeline: Feb 1, 2019 – May 31, 2020

The project’s main objective is to create a cloud-based MLaaS (Machine Learning as a Service) platform that can easily integrate Elliptic Labs’ virtual sensors with smartphones and other smart devices. Device manufacturers who use this platform need only define their product’s operating environment (e.g. memory, storage, computation capacity, power consumption, mechanical setup, and component characteristics) and select the range and sensitivity they desire from our software. Elliptic Labs then provides a software integration package customized to their product, as well as a set of machine-learning classifiers created by our expert team. These classifiers are capable of running on any smart device due to their low processing-power requirements. Furthermore, Elliptic Labs offers a full suite of tools on its cloud that allows OEMs to maximize the software’s machine-learning potential while making it possible for Elliptic Labs to provide enhanced support to its ever-growing customer base.
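To make the OEM workflow above concrete, here is a minimal sketch of what such a device profile could look like. Every field name and value is a hypothetical illustration, not Elliptic Labs’ actual API:

    # Hypothetical device profile an OEM might submit to the MLaaS platform.
    # All keys and values are illustrative assumptions, not a real interface.
    device_profile = {
        "device_name": "example-smartphone",
        "memory_mb": 4096,              # available RAM
        "storage_budget_mb": 32,        # storage reserved for the model
        "compute": "arm-cortex-a55",    # computation capacity
        "power_budget_mw": 15,          # allowed power consumption
        "microphones": 1,               # component characteristics
        "speakers": 1,
        "desired_range_cm": [2, 50],    # detection range selected by the OEM
        "sensitivity": "high",
    }
    # In this sketch, the platform would return an integration package and a
    # set of pre-trained classifiers matched to this profile.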

IMIR-UP Provides Benefits for

Our Customers

  • Reduced cost
  • Faster integration
  • Remote access to the “plug-n-play” features of the MLaaS
  • High performance products
  • Beautiful designs
  • Better user experience

Our Channel Partners

  • Faster integration
  • Easy to scale up
  • Access to new markets and opportunities

Our Investors

  • Higher returns

IMIR-UP Accomplishments to Date

Web Tools

  • Elliptic Manager
  • Data Recording Software
  • Data Quality Checker
  • Data Query and Upload

ML Pipeline

  • Training and building machine-learning models

Cloud Infrastructure

  • Backend development to manage access, data storage and cloud computing
  • Frontend development to support customer needs

Touchless Gestures White Paper

Even before the COVID-19 pandemic increased our need for technology, we had become attached to our electronic devices. They are with us 24/7. At work, we’re in front of a screen for at least 8 hours a day. When we finally get away from the desk, we pull a smartphone out of our pocket and shove the screen in front of our face to watch a video or track our friends’ social media posts. We plop down on the couch with a laptop or stare at a TV on the wall, binge-watching the latest and greatest shows. We ask smart speakers for answers to basic questions (in the hope they will actually understand what we’re asking), while ignoring the beeps and notifications from our smart refrigerators and ovens. We are besieged by smart devices. But how smart are these devices, really? For instance, when was the last time a “smart” device responded to a touchless gesture that was intuitive and natural, not learned? The simple fact is that we have conflated the term “smart” with “has a touchscreen,” and the two concepts could not be more different.

One of the main reasons for the broad acceptance and success of platforms like iOS and Android is that touchscreen gestures have historically been easier for humans to perform and for machines to interpret than natural user interfaces. It is easier for us to learn how to interact with touchscreens in a way that devices can understand than it is for us to modify our natural gestures and movements to conform to a device’s limited ability to comprehend gestures. As a result, the world has so accepted touchscreen gestures as the de facto standard for device interaction that touchscreens are now not only expected on devices; their very existence defines whether or not a device is smart.

Yet innovation and growth demand a different answer. Smart devices can no longer require users to learn how to interact with them on the device’s own, limited terms. Instead, the true smart device of the future must learn to adapt to its user. It must enter the world of natural language. It must enter the world of touchless gestures.

Not only are touchless gestures crucial for natural human-device interactions, but they are also critical for ensuring public safety in environments where social-distancing needs and fear of contamination make shared touchscreens a liability. COVID-19 has made the world aware of what leading OEMs already knew: that innovations targeting device recognition of and response to touchless gestures are the needed next step in smart-device evolution.

Elliptic Labs’ AI Virtual Smart Sensor Platform is a powerful choice for OEMs seeking to bring these innovations to a global audience. Since 2006, Elliptic Labs has been developing algorithms that provide intuitive and robust touchless gestures for a variety of consumer electronic devices. By utilizing Elliptic Labs’ virtual gesture sensors, OEMs can manufacture devices capable of tracking objects in 360° 3D space with millimeter accuracy. Elliptic Labs’ proven track record of delivering fast and precise distance measurements makes it the leader in providing responsive touchless gestures.

Because Elliptic Labs’ virtual gesture sensors use ultrasound to measure distance, devices that include them can detect touchless gestures using only a single off-the-shelf microphone and a single off-the-shelf speaker. Elliptic Labs’ AI Virtual Smart Sensor Platform provides highly optimized, real-time neural networks that require minimal system resources to function, eliminating the need for Internet connectivity and communication with the cloud. Furthermore, because the platform is a software-only solution, it gives OEMs maximal freedom to create beautiful devices with elegant industrial and mechanical designs.
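To illustrate only the general principle of ultrasonic ranging that such pulse-echo sensing relies on (not Elliptic Labs’ proprietary algorithms), a distance estimate follows directly from the echo’s time of flight:

    # Minimal sketch of pulse-echo ranging: a speaker emits an ultrasonic
    # pulse, a microphone hears the echo, and the round-trip time gives the
    # distance. Illustrative only; not Elliptic Labs' implementation.
    SPEED_OF_SOUND_M_PER_S = 343.0  # in air at about 20 degrees C

    def distance_from_echo(time_of_flight_s: float) -> float:
        """Distance in metres to the reflecting object, given the round-trip
        time between emitting the pulse and detecting its echo."""
        return SPEED_OF_SOUND_M_PER_S * time_of_flight_s / 2.0

    # Example: an echo arriving 1.2 ms after the pulse corresponds to an
    # object roughly 0.21 m (21 cm) away.
    print(distance_from_echo(0.0012))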

OEMs require touchless gesture solutions that are robust and intuitive, meaning they require no learning or training on the part of users. For this reason, Elliptic Labs’ world-leading ultrasonic touchless gestures are intentionally designed around larger, natural motions such as multi-directional swipes, double taps, and approach gestures. The resulting experiences are easily used by both 5-year-olds and 75-year-olds, as well as everyone in between.

In addition to providing valuable technology, Elliptic Labs’ extensive background in the smartphone industry provides reliability and confidence at the company level, offering a degree of trust usually found only in larger partners. Its Virtual Proximity Sensors have been deployed on millions of smartphones around the world through customers like Xiaomi, OnePlus, and Smartisan, proving that Elliptic Labs has both the scalability and the expertise needed to deliver its solutions to the next billion smart devices.

Beyond providing robust and responsive virtual gesture sensors, Elliptic Labs’ AI Virtual Smart Sensor Platform also offers AI and machine-learning expertise. This platform optimizes the creation and refinement of classifiers, thereby decreasing the staff, resources, and data necessary to create virtual gesture sensors and thus increasing OEM speed-to-market. The platform also enables real-time neural networks that require minimal hardware resources, allowing OEMs to create and offer consistent touchless gestures across product lines and applications.

With its superior product performance, employee expertise, and production experience, as well as its provision of a simplified path for OEMs to create smarter devices featuring intuitive touchless gestures, Elliptic Labs is perfectly positioned to power the next revolution in smart-device UI.

AI Virtual Smart Sensor Platform and Virtual Smart Sensors are trademarks of Elliptic Labs.

This work is partially supported by the EU H2020 Grant for SME Instrument Phase II, Project IMIR-UP (Grant No. 849692).