Lidar sensors, which measure the distance to a target by illuminating it with laser light and measuring the reflected light, haven't historically been Intel's domain. But that's changing today with the launch of the RealSense Lidar Depth Camera, a product family that incorporates solid-state lidar technology (i.e., lidar without moving parts) into cameras designed to measure depth.
The first RealSense Lidar Depth Camera, the L515, starts at $350 and is available for preorder starting today, with shipments expected the week of April 27. It uses a proprietary micro-electro-mechanical system (MEMS) mirror-scanning technique that Intel says makes it more power-efficient than other time-of-flight approaches. The company claims that the L515's power consumption for depth streaming, less than 3.5 watts, makes it the most energy-efficient high-resolution lidar camera on the market.
The L515 packs a lidar sensor that delivers depth precision throughout its entire range (25 centimeters to 9 meters), plus an RGB camera and a Bosch-made inertial measurement unit with a gyroscope and an accelerometer. An internal vision processor keeps the exposure time under 100 milliseconds, reducing motion blur, and photon latency clocks in at around 4 milliseconds, making the L515 well suited to real-time applications like autonomous navigation, gesture recognition, and hand tracking.
Both the lidar sensor and RGB camera can record up to 30 frames per second, though at different resolutions (1024 x 768 depth resolution, up to 1920 x 1080 RGB resolution). The lidar sensor’s field of view is roughly 70 degrees vertically and 55 degrees horizontally, enabling the capture of up to 23 million points of depth per second.
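As a sanity check, the quoted point rate follows directly from the depth resolution and frame rate:

```python
# Back-of-the-envelope check: a 1024 x 768 depth map captured 30 times per
# second yields roughly the 23 million depth points per second quoted above.
width, height, fps = 1024, 768, 30

points_per_second = width * height * fps
print(f"{points_per_second:,} points/s")  # 23,592,960 points/s
```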
Despite all the components onboard, the L515 has a diameter of 61mm and a height of just 26mm. It weighs about 3.5 ounces and is designed to be situated on any prototype or attached via mounting points to a tablet or phone for use in handheld room-scanning or volumetric measurement.
On the software side, Intel's RealSense SDK 2.0, which works with Windows, Linux, Android, and macOS, features wrappers for common platforms, languages, and engines, including Python, ROS, C/C++, C#, Unity, Unreal, OpenNI, and Node.js. Intel asserts that the full perception offered by the L515 provides strong business value in the logistics industry, particularly for companies looking to automate inventory management with volumetric measurement of products. Other applications might be found in 3D scanning, health care, retail, robotics, and more.
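For illustration, a minimal depth-streaming sketch using the SDK's Python wrapper (pyrealsense2) might look like the following. The stream parameters mirror the L515 specs above; the function and variable names are this example's own, and actually running the capture requires the SDK and an attached camera.

```python
def raw_depth_to_meters(raw_value, depth_scale):
    """Convert a raw 16-bit depth reading to meters via the device's depth scale."""
    return raw_value * depth_scale


def read_center_distance():
    # Requires librealsense's Python wrapper and a connected camera to run.
    import pyrealsense2 as rs

    pipeline = rs.pipeline()
    config = rs.config()
    # L515 depth stream per the specs above: 1024 x 768 at 30 FPS, 16-bit depth.
    config.enable_stream(rs.stream.depth, 1024, 768, rs.format.z16, 30)
    pipeline.start(config)
    try:
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        # get_distance() returns the range in meters at the given pixel;
        # (512, 384) is the center of the 1024 x 768 frame.
        return depth.get_distance(512, 384)
    finally:
        pipeline.stop()
```

The same pipeline API works across the RealSense family, so code written against a D400-series camera needs little more than new stream parameters to target the L515.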
Intel launched RealSense Technology — a suite of solutions designed to imbue machines with depth perception and object-tracking capabilities — several years ago in collaboration with major manufacturers. Previous generations of depth cameras were built into laptops and tablets from manufacturers that include Asus, HP, Dell, Lenovo, and Acer, as well as consumer-ready webcams from Razer and Creative.
As of January 2018, devices in Intel’s RealSense portfolio include the Depth Camera SR305, a standalone short-range coded light camera with a “streamlined” form factor and application-specific integrated circuit (ASIC) for depth calculations; the Vision Processor D4, a range of 28-nanometer vision processors designed to compute real-time stereo depth data; the Depth Module D400 series, which features active IR or passive stereo depth technology, rolling or global shutter image sensor technology, and wide or standard fields of view (depending on the configuration); and two ready-to-use depth cameras in the D435 and D415.
Intel unveiled one of the latest additions to RealSense in January. Dubbed the T265 tracking camera, it leverages the chipmaker’s Movidius Myriad 2 vision processing unit (VPU), ambient light captured with two fish-eye lenses, and simultaneous localization and mapping algorithms to spot objects and help machines — like robots and drones — keep track of their precise locations in an environment.