Maxim neural accelerator enables AI at IoT edge
- October 21, 2020
- Steve Rogerson

Californian electronics company Maxim Integrated has produced a neural network accelerator chip that enables IoT artificial intelligence (AI) in battery-powered devices.
The MAX78000 can reduce energy consumption and latency by a factor of more than 100, enabling complex embedded inference decisions at the IoT edge.
The microcontroller moves AI to the edge without performance compromises in battery-powered IoT devices. Executing AI inferences at less than a hundredth of the energy of a software implementation improves run-time for battery-powered AI applications, while enabling complex AI use cases previously considered impossible.
These power improvements come with no compromise in latency or cost: the device executes inferences 100 times faster than software running on low-power microcontrollers, at lower cost than FPGAs or GPUs.
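To make the run-time claim concrete, the arithmetic below sketches how a 100x cut in per-inference energy translates into battery life. All the specific figures (battery capacity, energy per inference, inference rate) are assumptions chosen for illustration, not Maxim specifications:

```python
# Hypothetical illustration of the run-time claim. The battery capacity,
# per-inference energy and inference rate below are ASSUMED values for the
# sake of the arithmetic, not figures from Maxim.

def runtime_days(battery_mwh, inference_uj, inferences_per_sec):
    """Days of continuous inferencing from a battery of given capacity."""
    # uJ/s over a day -> uJ/day; 1 mWh = 3.6e6 uJ
    mwh_per_day = inference_uj * inferences_per_sec * 86400 / 3.6e6
    return battery_mwh / mwh_per_day

BATTERY_MWH = 700              # assumed coin-cell capacity, ~700 mWh
SOFTWARE_UJ = 10_000           # assumed energy per inference in software
ACCEL_UJ = SOFTWARE_UJ / 100   # the article's claimed 100x reduction

print(runtime_days(BATTERY_MWH, SOFTWARE_UJ, 1))  # a few days
print(runtime_days(BATTERY_MWH, ACCEL_UJ, 1))     # 100x longer
```

Whatever the absolute numbers, the ratio is the point: cutting per-inference energy by 100x stretches the same battery 100x further at the same inference rate.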
AI technology allows machines to see and hear, making sense of the world in ways that were previously impractical. In the past, bringing AI inferences to the edge meant gathering data from sensors, cameras and microphones, sending those data to the cloud to execute an inference, then sending an answer back to the edge. This architecture works but is very problematic for edge applications due to poor latency and energy performance.
As an alternative, low-power microcontrollers can be used to implement simple neural networks; however, latency suffers and only simple tasks can be run at the edge.
By integrating a dedicated neural network accelerator with a pair of microcontroller cores, the MAX78000 can overcome these limitations, enabling machines to see and hear complex patterns with local, low-power AI processing that executes in real time.
“We’ve cut the power cord for AI at the edge,” said Kris Ardis, executive director at Maxim Integrated. “Battery-powered IoT devices can now do much more than just simple keyword spotting. We’ve changed the game in the typical power, latency and cost trade off, and we’re excited to see a new universe of applications that this innovative technology enables.”
Applications such as machine vision, audio and facial recognition can be made more efficient.
At the heart of the device is hardware designed to reduce the energy consumption and latency of convolutional neural networks (CNNs). This hardware runs with little intervention from either microcontroller core, streamlining operation. Energy and time are spent only on the mathematical operations that implement a CNN.
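The mathematical operations in question are overwhelmingly multiply-accumulates (MACs). A minimal pure-Python sketch of one convolutional layer shows the arithmetic that such an accelerator performs in hardware; a real deployment would use quantized weights and the vendor toolchain rather than code like this:

```python
# Minimal sketch of the multiply-accumulate (MAC) arithmetic at the core of
# a CNN layer. Pure Python for illustration only.

def conv2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation, as in most
    deep-learning frameworks) of a 2D image with a 2D kernel."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            acc = 0
            for ky in range(kh):
                for kx in range(kw):
                    acc += image[y + ky][x + kx] * kernel[ky][kx]  # one MAC
            row.append(acc)
        out.append(row)
    return out

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
edge = [[1, -1]]  # horizontal-difference kernel
print(conv2d(image, edge))  # [[-1, -1], [-1, -1], [-1, -1]]
```

Even this toy layer needs a MAC per kernel weight per output pixel; at realistic image and network sizes that is millions of MACs per inference, which is why executing them in dedicated hardware rather than on a CPU core dominates the energy and latency budget.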
To get data from the external world into the CNN engine efficiently, designers can use either of the two integrated microcontroller cores: the low-power Arm Cortex-M4 or the even lower-power RISC-V core.
Tools are available for a more seamless evaluation and development experience. The MAX78000 EVKit includes audio and camera inputs, along with out-of-the-box demos for large-vocabulary keyword spotting and facial recognition. Documentation helps engineers train networks in familiar tools such as TensorFlow or PyTorch.
“Artificial intelligence is frequently associated with big data cloud-based solutions,” said Kelson Astley, research analyst at Omdia. “Anything that can cut the power cord and reliance on big lithium-ion battery packs will help developers build AI solutions that are nimbler and more responsive to environmental conditions in which they operate.”
