Unlocking Mass-Market ADAS with AI-enabled Sensor Processing

May 30, 2024

By: Abhay Rai, Chief Product Officer, indie Semiconductor

Automotive safety has been a persistent challenge for the industry, with fatal crashes claiming lives at an alarming rate worldwide. While legislation and enforcement play a key role in addressing this issue, Advanced Driver Assistance Systems (ADAS) and increasing driver automation are key drivers, no pun intended, in enabling the creation of uncrashable cars.

To that end, indie recently announced a strategic investment in Expedera, a leading provider of scalable Neural Processing Unit (NPU) semiconductor intellectual property (IP). Through this partnership, indie will leverage Expedera's NPU IP to develop innovative next-generation ADAS solutions with embedded artificial intelligence (AI) processing capabilities.

NPUs are the critical components for processing the neural networks used in AI. In automotive applications, NPUs process sensor inputs for image recognition and object detection, enabling downstream decision-making. This makes them pivotal to advancing AI-based ADAS performance in modern vehicles.

Automated driving features like parking assist, adaptive cruise control, and lane departure warning leverage a growing number of sensors to read the environment. Cameras and radars (and soon LiDARs) capture the vehicle's surroundings at high resolution to build a more complete understanding of its environment. indie's portfolio of ADAS sensing solutions spans cameras, radar, LiDAR and ultrasonic; we are the only silicon company whose portfolio covers all sensor modalities. This means we are uniquely positioned to develop next-generation sensor processing solutions that leverage our multi-modal expertise to bring highly integrated processing solutions to market, including for sensor fusion. These next-generation ADAS solutions will require multiple processing elements, including image signal processing (ISP), digital signal processing (DSP) and general system processing (via CPUs). Additionally, NPUs will be critical to enable highly efficient processing of neural network-based AI workloads.
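
To make the division of labor among these processing elements concrete, here is a minimal sketch of such a pipeline in Python. The stage names, data shapes and placeholder outputs are assumptions chosen purely for illustration; they do not describe any actual indie product architecture.

```python
# Illustrative only: a toy model of the processing stages named above
# (ISP -> DSP -> NPU -> CPU). Shapes and outputs are assumptions for the
# sketch, not a description of any indie product.
import numpy as np

def isp_stage(raw_frame):
    """Image signal processing: e.g. normalize raw sensor values."""
    return raw_frame.astype(np.float32) / 255.0

def dsp_stage(frame):
    """Digital signal processing: e.g. simple noise reduction (3x3 box blur)."""
    kernel = np.ones((3, 3)) / 9.0
    h, w = frame.shape
    padded = np.pad(frame, 1, mode="edge")
    return np.array([[np.sum(padded[i:i+3, j:j+3] * kernel)
                      for j in range(w)] for i in range(h)])

def npu_stage(frame):
    """Neural-network inference: stand-in returning a fake detection list."""
    return [{"label": "vehicle", "score": 0.92}]  # placeholder output

def cpu_stage(detections):
    """General system processing: downstream decision-making."""
    return any(d["score"] > 0.9 for d in detections)

raw = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
brake = cpu_stage(npu_stage(dsp_stage(isp_stage(raw))))
print("actuate braking:", brake)
```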

By collaborating with AI leader Expedera and leveraging NPU IP customized for indie based on their Origin solution, indie will be able to offer ADAS solutions with optimal AI performance, low latency, high integration, and power efficiency. We believe scalability is key to allowing OEMs to deploy ADAS across their entire model line-up, including entry models, not just premium vehicles. Distributed-intelligence architectures, in which some AI-enabled processing of sensor data occurs at, or proximal to, the sensor, are key to this scalability, bringing multiple benefits such as reduced wiring cost and weight, sensor optionality, lower power consumption and better thermal management. This partnership is a significant step in indie's pursuit of the vision of the uncrashable car.
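
A quick back-of-envelope calculation illustrates why processing at or near the sensor helps with wiring cost and weight: a compact object list is orders of magnitude smaller than raw frames. All figures below are assumptions chosen for the example, not measurements of any specific sensor or indie product.

```python
# Assumed, illustrative figures: not measurements of any real system.
frame_w, frame_h = 1920, 1080       # assumed camera resolution
bytes_per_pixel = 2                 # assumed raw pixel format
fps = 30

raw_bandwidth = frame_w * frame_h * bytes_per_pixel * fps            # bytes/s

max_objects = 32                    # assumed detections per frame
bytes_per_object = 16               # e.g. class id + bounding box + score
object_bandwidth = max_objects * bytes_per_object * fps              # bytes/s

print(f"raw frames to central ECU : {raw_bandwidth / 1e6:8.1f} MB/s")
print(f"object lists from edge NPU: {object_bandwidth / 1e6:8.3f} MB/s")
print(f"reduction factor          : {raw_bandwidth / object_bandwidth:,.0f}x")
```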

A Q&A with Paul Karazuba, vice president of marketing at Expedera, outlines some of the technical aspects of Expedera’s NPUs and what this partnership brings to the automotive industry.

What sets Expedera apart from other Neural Processing Unit semiconductor intellectual property providers?

Expedera’s differentiator is our packet-based NPU architecture, which allows us to run AI networks at much lower power and with higher processing efficiency than typical NPUs.

Neural networks are made up of a series of layers, which can be thought of as three-dimensional blocks. Within the same network, the sizes of those blocks vary considerably; when different networks are compared, those sizes vary immensely (see figure below).


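A small, generic illustration of this point: the activation "blocks" in a convolutional network vary widely in size from layer to layer. The layer stack below is a VGG-like example chosen only to show the variation; it is not a network used by Expedera or indie.

```python
# Compute activation block sizes for a generic, VGG-like convolutional stack.
def conv_out(h, w, c_out, stride=1):
    return h // stride, w // stride, c_out

shape = (224, 224, 3)                      # assumed input image
layers = [(64, 1), (64, 1), (128, 2), (256, 2), (512, 2), (512, 2)]

for i, (c_out, stride) in enumerate(layers, start=1):
    shape = conv_out(shape[0], shape[1], c_out, stride)
    elems = shape[0] * shape[1] * shape[2]
    print(f"layer {i}: {shape[0]}x{shape[1]}x{shape[2]} "
          f"= {elems / 1e6:5.2f} M activations")
```
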
Typically, NPUs cannot break these layers into more processor-friendly pieces, leading to tremendous processing and memory overhead. In fact, such NPUs are often performing useful processing only 20-40% of the time; the rest of the time they sit idle, waiting for new data to arrive from system memory, while still burning power.

Expedera’s architecture specifically addresses this issue and was an important differentiator for indie and the vision of an uncrashable car. Our patented NPU solution, Origin, breaks those layers down into much smaller ‘packets’, as shown below. By packetizing, Origin creates a far more memory- and processing-friendly system, making it more efficient and able to run larger networks without penalty.


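To give a rough feel for the general idea of splitting a layer into smaller work units, here is a generic tiling sketch: a large activation tensor is divided into fixed-size pieces so each piece can fit in local memory. This is an illustration of tiling in general; Expedera's actual Origin packet format is proprietary and is not described here.

```python
# Generic tiling illustration, not Expedera's packet format.
from itertools import product

def packetize(height, width, channels, tile=(28, 28, 64)):
    """Yield (row, col, channel) ranges covering a layer in small tiles."""
    th, tw, tc = tile
    for r, c, ch in product(range(0, height, th),
                            range(0, width, tw),
                            range(0, channels, tc)):
        yield ((r, min(r + th, height)),
               (c, min(c + tw, width)),
               (ch, min(ch + tc, channels)))

packets = list(packetize(224, 224, 64))
print(f"{len(packets)} packets cover one 224x224x64 layer")
print("first packet:", packets[0])
```
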
What value does indie Semiconductor bring to the work that you’re doing at Expedera?

indie’s product lineup, deep expertise, and primary focus on the automotive market are absolutely invaluable to the Expedera team and the opportunity we foresee for Origin.

As an IP supplier, Expedera works across several industries, including consumer electronics and automotive. However, we don’t have the relationships or OEM knowledge that a market-focused chipmaker like indie does. As a leading automotive chip and solution provider, indie has visibility into automotive trends and needs, along with relationships, that are impossible for many IP suppliers like us to attain.

What are the top three automotive challenges that NPUs like Expedera’s Origin face?

The top three challenges NPUs face are scalability, power consumption, and flexibility:

  • Scalability – The AI processing needs of ADAS applications vary greatly: from rear emergency braking with pedestrian detection, to lane keeping assist and traffic light and sign recognition, all the way through to the higher levels of automation from L1 to L4. The processing capability, and software tooling, of the NPU needs to scale efficiently with the workload.
  • Power Consumption – Much of today’s automotive AI processing consumes significant power. In addition to the system thermal management challenges this causes, additional AI power consumption can disproportionately impact an EV’s driving range. AI architectures need to deliver high performance across differing ADAS AI workloads while maintaining power efficiency in a world of growing EV deployment.
  • Flexibility – New neural networks emerge frequently. An NPU architecture flexible enough to adapt to new network types enables future-proofing and upgradeability over a product’s lifetime. Additionally, automotive AI deployments increasingly require multiple neural networks running concurrently, and it is key that NPUs support this growing requirement.

How are Expedera and indie helping to solve those challenges?

AI and automotive are increasingly intertwined; next-generation ADAS use cases will rely on AI to deliver the necessary processing workloads. indie has market-leading multi-modal sensing solutions, and Expedera offers best-in-class AI processing. The combination of indie’s technology and customized Origin AI processing is formidable. Working closely together, we will exploit the inherent performance advantages of Origin and customize deployments on upcoming indie products to best match the scalable ADAS needs of OEMs, without the penalties and poor performance that come with typical NPUs.

Working alongside Expedera, indie will continue to advance our AI-enabled sensing solutions as we work towards creating a safer, more secure future for drivers, passengers and road users worldwide.

To learn more about the partnership, visit https://investors.indiesemi.com/news/news-details/2024/indie-Semiconductor-Announces-Strategic-Investment-in-AI-Processor-Leader-Expedera/default.aspx.

To learn more about the innovative work that Expedera is doing, visit https://www.expedera.com.