Google AI is committed to bringing the benefits of AI to everyone, and one way we do that is by building tools that make AI accessible. In 2016, I started AIY (do-it-yourself AI) as a new initiative from Google to help students and makers learn about AI through affordable, hands-on kits.
Over the past two years, we released the AIY Voice Kit and AIY Vision Kit, powered by Raspberry Pi boards and Google’s AI tools to inspire the next generation of engineers. In May 2017, we released Voice Kit as a surprise extra included with The MagPi Magazine #57. We were humbled by the demand and the organic community growth, which resulted in the kits being picked up by Micro Center stores across the U.S. People love building their own voice interfaces, and we thought it would be great to introduce another core pillar of AI to the same audience: image recognition. So in May 2018, we produced Vision Kit and distributed it even more broadly through Adafruit and Target stores in the U.S. As demand has grown worldwide, we’ve added Mouser Electronics as an international distributor.
A new board
Since our launch, we’ve been thrilled with the response from students and makers. We were also excited to find industry professionals using AIY kits to prototype new product ideas with on-device AI, drawn by the privacy, power efficiency, and low latency of processing neural networks on the device itself. So, this year we decided to invest more in this audience and offer a new platform of reliable hardware components and software tools for prototyping with on-device AI, in a way that easily scales to production.
We call it Coral, a platform for experimenting with on-device AI. We chose the name because it represents an evolving, diverse ecosystem, and we want Coral to foster a thriving AI ecosystem for all audiences, from makers to professionals, because we believe that great innovation happens when we all work together.
Our new products include Google’s Edge TPU chip, a purpose-built ASIC that accelerates neural networks running on-device, delivering fast processing without the overhead of passing data up to the cloud. It also makes it easy to enhance privacy by keeping user data on the device, under the user’s control. And, for those who need it, it works with Google Cloud’s suite of Internet of Things (IoT) services to allow for remote management and easy neural network development.
Coral products currently offer the Edge TPU in two form factors: a fully integrated development board, and a pluggable accessory for existing systems.
Coral Edge TPU dev board
Our dev board was designed for professionals who need a fully integrated system. It uses a System on Module (SoM) design in which the module containing the CPU/GPU/TPU snaps into the baseboard using high-density connectors. Together they form a single-board computer for prototyping, and the SoM is available as a stand-alone part that can be purchased in bulk for a production line.
The SoM includes the new NXP i.MX 8M SoC, connected to our Edge TPU over the PCIe bus. It also includes Bluetooth 5.0, dual-band 2.4/5GHz 802.11ac Wi-Fi, and a crypto chip for secure cloud connections when needed. The default OS is called Mendel, our derivative of Debian Linux for Coral boards.
The baseboard includes a variety of connectors to make it easy to bring in sensor data and attach to peripherals, including a 40-pin GPIO header configured in a way that’s consistent with many accessory boards on the market today.
The Edge TPU USB Accelerator
Our USB Accelerator is a pluggable accessory that upgrades existing systems, such as a Raspberry Pi. In fact, we designed the case to have the same footprint and mounting holes as the Raspberry Pi Zero, assuming this would be a popular setup.
It contains the same Edge TPU chip as the dev board and connects to host systems over USB 2.0/3.0. We’re leading with Linux drivers first and will support other OSs soon. A PCIe card version of the accelerator is also in the works.
We have a Python SDK that lets application developers interact with the Edge TPU chip. Neural network developers will use TensorFlow Lite, which uses a smaller set of operators designed for embedded systems, along with our web-hosted compiler, to produce models for the Edge TPU.
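To give a feel for the Python SDK, here’s a minimal sketch of classifying an image on the Edge TPU. The module path (`edgetpu.classification.engine`) and method names follow Coral’s getting-started documentation, and all file paths below are placeholders, not files shipped with the SDK:

```python
# A minimal sketch of image classification with the Edge TPU Python SDK.
# Class and method names follow Coral's getting-started docs; the model,
# label, and image paths are placeholders you would supply yourself.

def load_labels(path):
    """Parse a label file whose lines look like '0  background'."""
    labels = {}
    with open(path) as f:
        for line in f:
            idx, name = line.strip().split(maxsplit=1)
            labels[int(idx)] = name
    return labels

def classify(model_path, label_path, image_path, top_k=3):
    """Run one image through a model compiled for the Edge TPU."""
    # These imports require the Coral SDK and an attached Edge TPU device.
    from edgetpu.classification.engine import ClassificationEngine
    from PIL import Image

    engine = ClassificationEngine(model_path)
    labels = load_labels(label_path)
    image = Image.open(image_path)
    # The engine returns a list of (label_id, confidence_score) pairs.
    results = engine.classify_with_image(image, top_k=top_k)
    return [(labels.get(i, 'unknown'), score) for i, score in results]

# Example call (requires an Edge TPU device and real files):
# classify('mobilenet_v2_edgetpu.tflite', 'imagenet_labels.txt', 'parrot.jpg')
```

The model itself would be a TensorFlow Lite file that has been passed through the web-hosted compiler so its supported operators run on the Edge TPU.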
To help you get started, we’re releasing a number of sample Python applications and pre-compiled models, built on open source architectures and tested on the Edge TPU. You’ll be able to run these out of the box, retrain them for your needs, or create your own from scratch using the software tools.
Looking ahead, we plan to expand Coral for a broad range of use cases in a way that’s easy to prototype and scale to production. Our AIY kits will continue to be available, and we’re looking at how to evolve them further using our Edge TPU products.
We hope you’ll find our new platform inspiring for your ideas, and we’re excited to see what you’ll make!