In June 2017, shortly after Misty Robotics spun out of Boulder, Colorado-based startup Sphero, it announced intentions to develop a “mainstream” home robot with the help of hobbyists and enthusiasts the world over. With $11.5 million in venture capital from Venrock and Foundry Group in the bank, it wasted no time in getting to work, unveiling a development platform called Misty I at the 2018 Consumer Electronics Show. A few months later, Misty took the wraps off the second iteration of its robot — Misty II — and made 1,500 units available for preorder.

It’s been a long time coming, but following a successful Kickstarter campaign in which Misty raised just short of $1 million, the company announced this week that it’s begun shipping Misty II units to 500 early backers. Alongside the hardware, the startup says it’ll soon publicly launch its JavaScript-based software development kit, which will include a Visual Studio Code extension and an API explorer in addition to samples, documentation, and Command Center and Skill Runner web interfaces.
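
Skills for Misty II are written in JavaScript and run on the robot itself against its onboard skill API. For a rough sense of the workflow, here is a minimal skill sketch; the method and asset names below follow Misty’s public documentation, but the exact signatures should be treated as assumptions rather than a verified reference.

```javascript
// helloMisty.js: a minimal Misty II skill sketch. Skills run on the robot
// and call into a global `misty` object injected by the runtime, so no
// imports are needed. Method names track Misty's docs; signatures and
// asset names are approximations.

// Set the chest LED to a calm blue (red, green, blue).
misty.ChangeLED(0, 100, 255);

// Show one of the built-in "eye" images on the 4.3-inch display.
misty.DisplayImage("e_DefaultContent.jpg");

// Play one of the stock sounds through the chest-mounted speakers.
misty.PlayAudio("s_Awe.wav");

// Log a message that surfaces in the Skill Runner console.
misty.Debug("Hello from a Misty II skill!");
```

In practice, a skill like this is uploaded to the robot, paired with a small JSON meta file, through the Skill Runner web interface.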

For new customers, Misty II is now available starting at $2,399 (a 25% discount off MSRP) ahead of an official market launch later this year.

“Delivering Misty II to our crowdfunding backers is a major milestone for the company as they will play a special role in helping us prepare Misty for her official market launch later this year,” said Misty Robotics founder and head of product Ian Bernstein. “Our backers are investors in the vision of personal robots becoming a reality in our lives. We are very excited to see how hundreds of developers bring Misty to life.”

Above: The Misty II robot for developers.

Image Credit: Courtesy Misty Robotics

For the uninitiated, the vaguely humanoid Misty II weighs in at six pounds, stands 14 inches tall, and packs electronics like a 4K Sony camera, a 4.3-inch LCD display, twin chest-mounted speakers, eight time-of-flight sensors, and three Qualcomm Fluence Pro-powered far-field mics. An Occipital sensor array affixed to its “forehead” sports a 166-degree wide-angle camera and IR depth sensors, enabling simultaneous localization and mapping (Occipital’s Bridge Engine handles the spatial computing bit). And on its back sits a module compatible with development boards like the Raspberry Pi 4 and Arduino Uno.

Misty II’s head — which sits on a “neck” with three degrees of freedom (3DoF) — has capacitive touch sensors for additional controls, and a flashlight embedded near the right “eye.” There’s a pair of concealed chipsets (a Qualcomm Snapdragon 820 and 410) that perform heavy computational lifting, and a swappable panel that plays nicely with different camera types, laser pointers, and other third-party sensors and controls.

The sensors work in tandem to guide Misty II back to its included charging station, even in the dark. Time-of-flight bump sensors at all four corners of the base keep the robot from running into obstacles or falling off surfaces like coffee tables. And while the arms don’t do anything on their own, they’re designed to be extensible, so developers can swap them out for attachments like cupholders.
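
Those time-of-flight readings are exposed to skills as event streams, so developers can layer their own obstacle behavior on top of them. The sketch below registers for front-facing range events and halts the drive motors when something comes within roughly 10 centimeters; the method and event names follow Misty’s JavaScript skill API documentation, while the payload fields are assumptions made for illustration.

```javascript
// tofStop.js: sketch of obstacle-triggered stopping (payload shape assumed).

// Listen only to the front-center range sensor and return just the distance;
// both property names are taken from Misty's docs but treated as assumptions.
misty.AddPropertyTest("FrontTOF", "SensorPosition", "==", "Center", "string");
misty.AddReturnProperty("FrontTOF", "DistanceInMeters");

// Subscribe to time-of-flight messages, debounced to every 250 ms.
misty.RegisterEvent("FrontTOF", "TimeOfFlight", 250, true);

// Creep forward slowly (linear %, angular %).
misty.Drive(10, 0);

// By convention, the callback name matches the registered event name.
function _FrontTOF(data) {
    var distance = data.AdditionalResults[0]; // assumed payload shape
    if (distance < 0.1) {
        misty.Stop();                       // halt the treads
        misty.PlayAudio("s_Annoyance.wav"); // grumble about the obstacle
    }
}
```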

On the software side of the equation, Misty II runs two operating systems, Windows IoT Core and Android 8 Oreo; the latter supplies navigation and computer vision. Misty II works with third-party services such as Amazon’s Alexa, Microsoft’s Cognitive Services, and the Google Assistant, and it allows owners to create custom programs and routines, including ones that tap into machine learning frameworks like Google’s TensorFlow and Facebook’s Caffe2.
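
Because those capabilities are also exposed over a local REST API, integrations don’t have to run on the robot at all. The Node.js sketch below is offered as an illustration rather than a verified reference: the camera endpoint and its parameters are approximations of Misty’s REST API, and the vision service URL is hypothetical. It grabs a photo from the robot and forwards it to a third-party image-analysis service.

```javascript
// relayPhoto.js: Node 18+ sketch (uses the built-in fetch).
// Assumptions: the Misty camera endpoint/params and the vision endpoint
// below are placeholders, not verified API references.

const MISTY_IP = "192.168.1.50";                              // robot's LAN address
const VISION_URL = "https://example-vision-api.test/analyze"; // hypothetical service

async function main() {
  // Ask the robot to take a photo and return it base64-encoded (assumed params).
  const photoRes = await fetch(`http://${MISTY_IP}/api/cameras/rgb?base64=true`);
  const photo = await photoRes.json();

  // Forward the image to a third-party vision service for labeling.
  const visionRes = await fetch(VISION_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ imageBase64: photo.result.base64 }), // assumed response shape
  });

  console.log(await visionRes.json());
}

main().catch(console.error);
```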

Misty says that over the past few months, developers with early access have begun to imbue Misty II with facial recognition, robust locomotion, more than 40 “eyes,” and more than 80 sounds. Moreover, it says that early customers are building skills for inventory data collection, home property inspection, environmental monitoring, spatial data collection, eldercare, autism therapy, and personal engagement.

Misty isn’t exactly rushing to market — it has a 10-year plan, and it’s taking a hands-on approach to development. While a few preprogrammed skills (like autonomous driving and voice recognition) are available on GitHub, the idea is to let developers come up with use cases that the founding team might not have thought of.

That said, Misty II won’t be bereft of capabilities out of the box. Here’s what will be available (a sketch combining a few of these follows the list):

  • Facial detection and recognition
  • Mobile sound localization
  • Image and graphic display
  • Audio playback
  • Sequential and one-time photo capture
  • Audio recording
  • Wake word (Misty II can be woken with the phrase “Hey, Misty”)
  • Raw sensor access
  • Programmable personality
  • Skill sharing via the Misty community forum and GitHub
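
Here’s a rough sketch of how a few of those pieces could come together in one skill: a simple greeter that starts face recognition, listens for recognition events, and reacts on the display and speakers. Method and event names follow Misty’s documentation; the payload shape and asset names are assumptions.

```javascript
// greeter.js: sketch combining facial recognition, the display, and audio.

// Start the onboard face-recognition pipeline.
misty.StartFaceRecognition();

// Return only the recognized label, debounced to once per second.
misty.AddReturnProperty("FaceRec", "Label");
misty.RegisterEvent("FaceRec", "FaceRecognition", 1000, true);

function _FaceRec(data) {
    var label = data.AdditionalResults[0]; // assumed payload shape

    misty.DisplayImage("e_Joy.jpg"); // swap in a happier set of "eyes" (assumed asset)
    misty.PlayAudio("s_Joy.wav");    // stock sound (assumed asset)
    misty.Debug("Saw: " + label);    // visible in the Skill Runner console
}
```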

Features coming soon include video capture of up to 10 seconds and 3D mapping integration.