If you’re a software dev looking to get a head start on AI development at the edge, why not try on Google’s new hardware for size? The search company today made available the Coral Dev Board, a $150 computer featuring a removable system-on-module with one of its custom tensor processing unit (TPU) AI chips. It also debuted the Coral USB Accelerator, a $74.99 USB dongle designed to speed up machine learning inference on existing Raspberry Pi and Linux systems, and a 5-megapixel camera accessory that starts at $24.99.
TPUs, for the uninitiated, are application-specific integrated circuits (ASICs) developed specifically for neural network machine learning. The first-generation design was announced at Google I/O in May 2016, and the newest — the third generation — was detailed in May of last year.
The TPU inside the Coral Dev Board — the Edge TPU — is capable of “concurrently execut[ing]” deep feed-forward neural networks (such as convolutional networks) on high-resolution video at 30 frames per second, Google says, or a single model like MobileNet V2 at over 100 frames per second. It sends and receives data over PCIe and USB, and it taps the Google Cloud IoT Edge software stack for data management and processing.
Edge TPUs aren’t quite like the chips that accelerate algorithms in Google’s data centers — those TPUs are liquid-cooled and designed to slot into server racks, and have been used internally to power products like Google Photos, Google Cloud Vision API calls, and Google Search results. Edge TPUs, on the other hand — which are about a quarter the size of a penny — handle calculations offline and locally, supplementing traditional microcontrollers and sensors. Moreover, they don’t train machine learning models. Instead, they run inference (prediction) with a lightweight, low-overhead version of TensorFlow that’s more power-efficient than the full-stack framework: TensorFlow Lite.
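In practice, running a model on an Edge TPU follows the standard TensorFlow Lite interpreter flow, with the chip reached through a delegate library. The sketch below illustrates that flow under stated assumptions: the `tflite_runtime` package, the `libedgetpu.so.1` delegate name, and the model filename are drawn from Google's published Coral and TensorFlow Lite APIs, not from this article.

```python
# Sketch of Edge TPU inference via the TensorFlow Lite interpreter.
# Assumptions: the tflite_runtime package and the Edge TPU runtime library
# are installed, and model_path points at a quantized, Edge TPU-compiled model.

def top_k(scores, k=3):
    """Return the indices of the k highest class scores, best first.

    Pure-Python post-processing of the interpreter's output vector."""
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

def classify(image, model_path="mobilenet_v2_edgetpu.tflite"):
    """Run one inference on the Edge TPU and return the raw output scores."""
    # Hardware-dependent imports live here, so the helper above stays
    # usable on machines without an accelerator attached.
    from tflite_runtime.interpreter import Interpreter, load_delegate

    interpreter = Interpreter(
        model_path=model_path,
        experimental_delegates=[load_delegate("libedgetpu.so.1")])
    interpreter.allocate_tensors()

    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    # Quantized models expect uint8 input shaped to the model's input tensor.
    interpreter.set_tensor(inp["index"], image)
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])[0]
```

Calling `classify` needs the accelerator and runtime present; `top_k` is ordinary post-processing that works anywhere.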
Toward that end, the Dev Board, which runs a derivative of Linux dubbed Mendel, spins up compiled and quantized TensorFlow Lite models with the aid of a quad-core NXP i.MX 8M system-on-chip paired with integrated GC7000 Lite Graphics, 1GB of LPDDR4 RAM, and 8GB of eMMC storage (expandable via microSD slot). It boasts a wireless chip that supports Wi-Fi 802.11b/g/n/ac 2.4/5GHz and Bluetooth 4.1, a 3.5mm audio jack, and a full-size HDMI 2.0a port, plus USB 2.0 and 3.0 ports, a 40-pin GPIO expansion header, and a Gigabit Ethernet port.
The Coral USB Accelerator similarly packs an Edge TPU and works at USB 2.0 speeds with any 64-bit Arm or x86 platform supported by Debian Linux. In contrast to the Dev Board, it has a 32-bit Arm Cortex-M0+ microprocessor running at 32MHz, accompanied by 16KB of flash and 2KB of RAM.
Google says PCIe versions that snap into M.2 or mini-PCIe expansion slots are on the way.
As for the camera, which is manufactured by OmniVision, it has a 1.4-micrometer sensor with an 84-degree field of view, 1/4-inch optical size, and 2.5mm focal length, and it connects to the Dev Board over a dual-lane MIPI interface. In addition to automatic exposure control, white balance, band filter, and black-level calibration, it features adjustable color saturation, hue, gamma, sharpness, lens correction, pixel canceling, and noise canceling.
Both the SOM from the Dev Board and PCIe versions of the Accelerator are available for volume purchase, and Google says it’ll soon release the baseboard schematics for those who want to build custom carrier boards.