Enabling Edge AI through future-ready SDKs
Edge AI is here to stay! Artificial intelligence (AI) powers many real-world applications we see in our daily lives. Once seen as an emerging technology, AI has now successfully penetrated every industry, B2B and B2C alike: banking, logistics, healthcare, defence, manufacturing, retail, automotive, and consumer electronics. Smart speakers such as Amazon Echo and Google Nest are familiar examples of Edge AI solutions in the consumer electronics sector. AI is a powerful technology, and humankind has set its sights on harnessing its potential to the fullest. Bringing intelligence onto the device itself opens up very useful and creative possibilities.
The key requirements to factor into an Edge AI architecture are bandwidth, latency, privacy, security, and power consumption. While envisioning an Edge AI solution, each of these must be weighed carefully to decide which can be traded off while keeping the solution effective.
AI enables machines to perform cognitive functions such as perceiving, reasoning, and learning much as humans do, but faster and more accurately. AI implementation is broadly classified into two phases: learning and inference.
Learning allows machines to parse large existing datasets and learn by recognising patterns. Because of its compute-intensive nature, learning happens in the cloud, which provides vast storage capacity and AI-accelerated hardware for faster processing. The inference phase involves making decisions on new data based on what was learned from the already-processed data. For ADAS, inference has to be fast and run on the edge device, whereas the compute-intensive learning can happen in the cloud.
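The split above can be sketched in miniature. In this purely illustrative example (a toy logistic-regression "model", not a real framework API), the floating-point training loop stands in for cloud learning, and the 8-bit quantised scoring stands in for inference on a resource-constrained edge device:

```python
import math
import random

# "Cloud" learning phase: fit a tiny logistic-regression classifier in full
# floating point on a synthetic dataset (label = 1 when x > 0.5).
random.seed(0)
data = [(x, 1 if x > 0.5 else 0) for x in (random.random() for _ in range(200))]

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid activation
        w -= lr * (p - y) * x
        b -= lr * (p - y)

# "Edge" deployment phase: quantise the learned parameters to 8-bit integers
# so inference fits a constrained device; only the sign of the score matters
# for the final decision, so the quantised model keeps the boundary near 0.5.
scale = 127.0 / max(abs(w), abs(b))
wq, bq = round(w * scale), round(b * scale)

def edge_infer(x):
    """Integer-friendly inference as it might run on the edge device."""
    return 1 if wq * x + bq > 0 else 0

print(edge_infer(0.9), edge_infer(0.1))
```

The heavy iterative loop runs once, offline, with cloud-scale resources; the deployed artefact is just two small integers and a comparison, which is the essence of the learning/inference split.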
A self-driving vehicle receives and generates large amounts of heterogeneous data, which can be used to improve predictability and reliability. Predicting vehicle health can be done in the cloud, with an AI algorithm churning through all that data and making sense of it. Pedestrian detection, by contrast, is time-critical: it has to actuate the automated braking system, so near-zero latency is expected, and ADAS functionality must be processed in real time. In this scenario, the self-driving vehicle itself needs to be treated as an edge device for ADAS functionality. This is indeed possible, with the available high-end GPUs assigned to safety-critical tasks while other vehicle telemetry data, which is not time-critical, is processed in the cloud.
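One way to picture this partitioning is a simple latency-budget check. The task names and budgets below are illustrative assumptions only, not real ADAS figures:

```python
# Hypothetical latency budgets in milliseconds; values are illustrative.
TASK_BUDGET_MS = {
    "pedestrian_detection": 10,      # safety-critical, actuates braking
    "automated_braking": 5,          # safety-critical
    "vehicle_telemetry": 5_000,      # tolerant of round-trip delay
    "health_prediction": 60_000,     # batch analytics, not time-critical
}

CLOUD_ROUND_TRIP_MS = 100  # assumed network round-trip to the cloud

def placement(task: str) -> str:
    """Keep a task on the vehicle's edge GPU when the cloud round-trip
    alone would blow its latency budget; otherwise offload to the cloud."""
    return "edge" if TASK_BUDGET_MS[task] < CLOUD_ROUND_TRIP_MS else "cloud"

for task in TASK_BUDGET_MS:
    print(f"{task}: {placement(task)}")
```

Under these assumed numbers, the safety-critical tasks land on the edge and the telemetry workloads go to the cloud, matching the trade-off described above.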
In the financial services space, on the other hand, financial data needs to be secure and always available, so power consumption will be high. However, AI-based inference such as a pre-approved loan offer being sent to a consumer's mobile is not a time-critical event, so latency is acceptable. All computing for financial services can be done in the cloud, on high-end infrastructure with AI-accelerated hardware platforms and security solutions, and still be cost-effective. So, based on the end application, there are trade-offs that are valuable and serve the end purpose.
Edge AI market overview
Globally, the AI chipset market is expected to be valued at USD 7.6 billion in 2020 and to reach USD 57.8 billion by 2026, a CAGR of 40.1% over the forecast period. Implementing AI is the current trend in chip technology, and it is going to stay that way; many leading semiconductor companies and venture capitalists see it as the right technology front for investment.
Changing hardware dynamics for learning and inference
The Edge AI hardware market is segmented into CPUs, GPUs, ASICs, and FPGAs. ASICs deliver high processing capability with low power consumption, making them well suited to edge devices in many applications. GPUs, in contrast, can be overpriced for an edge solution, but for an autonomous vehicle they are the right fit for handling image manipulation at lightning speed. Deep learning frameworks are still evolving, which makes custom hardware hard to design; reconfigurable FPGAs give device manufacturers close-to-ASIC performance and can be reprogrammed to serve changing needs.
It is estimated that by 2025, inference in Edge AI architectures will shift to 70% on ASICs, up from 30% today, with 20% on GPUs. Training, which was split equally between CPUs and GPUs in 2017, is estimated to move to 70% on ASICs and 20% on FPGAs by 2025. Edge devices are essentially embedded products with resource constraints, so Edge AI implementation needs to be approached as an application-specific exercise. AI-based applications for edge devices include intelligent robots, autonomous vehicles, and smart home appliances, among others. The primary workloads that run on Edge AI relate to image/video, sound and speech, natural language processing, device control, and high-volume computing.
So far we have spoken only of the hardware, but what about the software that makes this hardware execute to its full potential?
Software: The new frontier for Edge AI
The global Edge AI software market is estimated to cross USD 3 billion by 2027. Edge AI software comprises the complete software stack that enables AI algorithms to run on the edge device and make appropriate decisions without the need for connectivity.
At this juncture of technology innovation, semiconductor companies are using AI solutions to realise new strategies to grow their business and achieve wider hardware adoption. Many semiconductor companies are no longer seen as mere component providers but as complete platform solution providers. They recognise the value gained from the software and services associated with their chipsets, which allow for rapid adoption of their platforms by device manufacturers.
To increase hardware adoption, semiconductor companies are investing heavily in software development kits integrated with ML/DL frameworks, delivered as a package that lets developers get started quickly and at ease with all the components for embedded systems development. This allows device manufacturers to utilise silicon resources effectively in a shorter time span and gain the advantage of being first to market. The Edge AI chipset market is thus witnessing software and its associated technology stacks facilitate wider adoption and faster development cycles.
AI processing at the Edge has allowed semiconductor companies and electronic device manufacturers to look beyond the horizon and re-define themselves with innovative solutions.
Edge AI driving digital transformation
To witness how useful Edge AI could be in the industrial sector, consider that machines working around the clock stand a high chance of wear and tear, and might even break down at some point, resulting in lost production. Each machine can be treated as an edge device, with an AI-enabled chip trained to continuously monitor the machine's working condition from specific parameters gathered through sensors. If an abnormality is noticed, the controller can trigger an alarm suggesting maintenance is required.
Such predictive analytics in a factory environment is crucial to reducing downtime and allows proper planning and scheduling of maintenance activity. Real-time analytics and the ability to analyse the device's contextual information locally enable the machine to provide actionable insights and extend its own lifetime.
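The monitoring loop described above can be sketched as a simple rolling-statistics check. This is a minimal illustrative stand-in for an on-device model, assuming made-up vibration readings; a real deployment would use learned thresholds rather than a fixed z-score:

```python
import statistics

def monitor(readings, window=20, threshold=3.0):
    """Flag any reading that deviates more than `threshold` standard
    deviations from the rolling window of recent samples."""
    alarms = []
    for i in range(window, len(readings)):
        hist = readings[i - window:i]
        mu, sigma = statistics.mean(hist), statistics.pstdev(hist)
        if sigma and abs(readings[i] - mu) > threshold * sigma:
            alarms.append(i)  # trigger a maintenance alert for this sample
    return alarms

# Simulated vibration readings: steady around 1.0, then a spike that might
# indicate bearing wear (values are synthetic, for illustration only).
normal = [1.0 + 0.01 * ((i * 7) % 5 - 2) for i in range(40)]
readings = normal + [1.5]  # abnormal spike at index 40
print(monitor(readings))   # the spike is flagged for maintenance
```

Running this locally on the controller, rather than shipping every sample to the cloud, is exactly the kind of contextual, low-latency analysis the paragraph describes.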
Edge AI holds immense potential
Unlike data centres, smart home appliances need not have high processing capability. Smart home appliances (e.g. smart speakers, microwave ovens, water purifiers) can make do with lower processing capability suited to the use cases they are expected to perform. Consider voice control, which many users, young and old, have come to expect as an essential feature in any smart device post-COVID-19: smart speakers allow the user to interact with the device through voice commands alone, providing a safe, touch-free way of interacting. This is possible thanks to highly efficient neural decision processors, which integrate deep learning techniques.