Automotive - Edge AI and Vision Alliance

Sony Semiconductor Solutions Concludes Pedestrian Safety Challenge, Announces Winners with tinyML Foundation and The City of San José

October 6, 2023

Sony Reveals Leopard Imaging, NeurOHM, and King Abdullah University of Science and Technology as winners of the Tech for Good competition in support of the city’s Vision Zero initiatives.

SAN JOSÉ, Calif., Oct. 5, 2023 /PRNewswire/ — Today, Sony Semiconductor Solutions America (SSS-A), alongside the tinyML Foundation and The City of San José, announced the final winners for the Pedestrian Safety Challenge Hackathon competition, which began in May as an effort to reduce pedestrian-involved accidents, in connection with the city’s Vision Zero initiatives.

The three organizations joined together to encourage teams across the globe to tackle this issue, as pedestrian injuries and fatalities have become more common due to factors such as distracted driving, distracted walking, and illegal road crossings.

The hackathon boasted 29 participating teams from across the globe, including the United States, Germany, Lebanon, Nigeria, and Saudi Arabia, as well as teams local to Silicon Valley and the San Francisco Bay Area (SFBA).

Mark Hanson, Vice President and Head of Marketing for System Solution Business Development at SSS-A, states, “It was a pleasure to partner with tinyML and the City of San José on the important issue of pedestrian safety, especially as a native resident and with Sony Electronics’ office in the city. The groundbreaking, people-first solutions coming from these teams make us optimistic, not just about local Vision Zero efforts, but about seeing these technologies used to benefit communities around the globe.”

First place was awarded to the Leopard Imaging team, which presented a solution featuring SSS’s AITRIOS™ platform and IMX500-enabled hardware. The NeurOHM team took second place, the team from King Abdullah University of Science and Technology (KAUST) took third place, and the special Edge Impulse award also went to the KAUST team.

Evgeni Gousev, Senior Director at Qualcomm and Chair of the Board of Directors at the tinyML Foundation, says, “As a global non-profit organization with a mission to accelerate the development and adoption of energy-efficient, sustainable machine learning technologies, we were enthusiastic about this collaboration with the City of San José, Sony, and other partner companies. We were very pleased to see a strong response from the tinyML community, are grateful to all the teams and participants who contributed their ideas and proposals for this real-world problem, and would like to congratulate the finalists on delivering innovative-yet-practical solutions.”

Hanson continues, “It was very exciting for us that Leopard Imaging entered with an AITRIOS-built solution and won first place in the Hackathon. It shows that vision AI tools like AITRIOS can provide a tangible, low-cost, and scalable platform to support Vision Zero and pedestrian safety initiatives.”

“Through our partnership with Sony and tinyML, brilliant minds from across the world have generated ideas that will ultimately save lives in San José and beyond,” said San José Mayor Matt Mahan.

To learn more about the Pedestrian Safety Challenge and its winning solutions, please visit the tinyML Foundation website.

About Sony Semiconductor Solutions America

Sony Semiconductor Solutions America is part of Sony Semiconductor Solutions Group, today’s global leader in image sensors. We strive to provide advanced imaging technologies that bring greater convenience and joy to people’s lives. In addition, we also work to develop and bring to market new kinds of sensing technologies with the aim of offering various solutions that will take the visual and recognition capabilities of both humans and machines to greater heights. Visit us at: https://www.sony-semicon.co.jp/e/

Corporate slogan “Sense the Wonder”: https://www.sony-semicon.co.jp/e/company/vision

Unleashing LiDAR’s Potential: A Conversation with Innovusion

September 30, 2023

This market research report was originally published at the Yole Group’s website. It is reprinted here with the permission of the Yole Group.

The market for LiDAR in automotive applications is expected to reach US$3.9 billion in 2028 from US$169 million in 2022, representing a 69% Compound Annual Growth Rate (CAGR).
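
As a quick sanity check, the 69% CAGR quoted above can be reproduced directly from the two revenue figures in this article. The short Python sketch below uses only those numbers; it is illustrative arithmetic, not additional market data.

```python
# Reproduce the implied CAGR from the revenue figures quoted above:
# US$169 million in 2022 growing to US$3.9 billion in 2028.
start_revenue = 169e6          # US$, 2022
end_revenue = 3.9e9            # US$, 2028
years = 2028 - 2022            # 6-year span

cagr = (end_revenue / start_revenue) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # ~68.7%, matching the ~69% quoted
```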

According to Yole Intelligence’s LiDAR for Automotive 2023 report, Innovusion was the leading player in 2022 with a 28% market share. The LiDAR market in automotive is just starting, and actual LiDAR volumes could triple in 2023, showing the strong dynamics in this market led by Chinese players, whether OEMs or LiDAR manufacturers.

Innovusion is a global LiDAR manufacturer developing hybrid solid-state LiDAR based on 1,550nm components and using a fiber laser as a light source. The company has a successful partnership with NIO, a Chinese automotive OEM, and its LiDAR is installed on the roof of NIO’s cars.

In this context, Pierrick Boulay, Senior Market and Technology Analyst at Yole Intelligence, part of Yole Group, had the opportunity to share his expertise and vision with Yimin Li, CTO and co-founder of Innovusion.

Discover the details of the conversation below.

Pierrick Boulay (PB): Please introduce your company and your position.

I am Yimin Li, the CTO and co-founder of Innovusion. I have a background in Quantum Electronics, and prior to founding Innovusion, I worked at companies such as Velodyne, Baidu, Agilent, AOSense, and GE Healthcare, accumulating over 20 years of experience and expertise in electronic, optical, and laser technologies, including LiDAR.

I founded Innovusion with Junwei Bao in Silicon Valley in 2016. It’s hard to believe that just seven years later, we are now a global leader in the LiDAR space and that our ability to design and manufacture 1550nm LiDAR at scale is unparalleled. I think that speaks volumes about not just the quality of our underlying technology but, even more so, the quality of our global teams. We have R&D teams in the US, in Silicon Valley, as well as in Suzhou & Shanghai, in China. In addition, we also operate state-of-the-art, highly automated, auto-grade manufacturing bases in Suzhou and Ningbo that are fully operational. These were all developed and built from scratch and are producing and delivering at scale for our global automotive partners, like NIO and Faraday Future. For example, our flagship, automotive-grade LiDAR sensor, Falcon, is included as standard equipment on almost all NIO models (including their ET7, ES7, ET5, EC7, ES6, ET5T, and ES8), as part of their highly acclaimed Aquila autonomous driving system.

In 2022, our LiDAR achieved the No.1 volume and revenue in passenger vehicles globally (even surpassing expectations from Yole!). This year is going even better, and we’re on target to surpass last year’s passenger vehicle volume by the end of July 2023, with a forecasted trajectory of continued high-speed growth thereafter.

At the same time, we have signed strategic contracts with many commercial vehicle partners, such as TuSimple, Encon, Zhito, Plus, and DeepWay, to jointly promote the large-scale application of high-performance LiDAR in the commercial logistics field.

In addition, our sensors are also vital to smart transportation initiatives and programs, and we’re partnering with numerous giants of smart transportation, highway, rail, and industrial automation globally to bring the power of LiDAR to bear on improving the efficiency, safety, and flow of traffic in cities and ports around the world.

PB: Could you briefly introduce Innovusion’s LiDAR technology?

Our company focuses on the development of hardware and software solutions for LiDAR. Currently, we have the Falcon series and Robin series LiDAR hardware products, as well as OmniVidi on the software side.

Our flagship LiDAR sensors are the Falcon series, which uses 1550nm laser technology. We’ve produced and shipped over 150,000 units of this sensor and are proud to be a critical part of NIO’s Aquila sensor suite and standard equipment on almost all of their new vehicles. These are the only mass-produced 1550nm LiDARs today, and that is something that we are very, very proud of. It is hard enough to get your LiDAR to work in a lab, but to then mass-produce, deliver to your customers like clockwork, and ultimately help power L2+ ADAS systems on cars all around the world? Those are the kinds of things that Junwei and I dreamt of when we started this company, and sometimes it is hard to believe how far we’ve come. From a technical standpoint, Falcon has a maximum detection range of 500 meters (250 meters @ 10% reflectivity), making it ideal for long-range applications. It also features a hybrid solid-state scanning mechanism that allows for high-precision scanning and dynamic focusing. When accounting for the precision, the production at scale, and our ability to work closely with partners and customize the details as needed, Falcon really stands in a class of its own at the moment. And it keeps getting better.


Falcon – Automotive-grade Ultra-long Range Front-view LiDAR

Next is our Robin platform, which has a 905nm laser light source. It also leverages hybrid solid-state technology and incorporates cutting-edge electronics, optical technology, and a highly modular architecture to achieve excellent product performance and adaptability in various laser detection scenarios.

Robin comes in two models – Robin-E & Robin-W:

  • Robin-E is an advanced long-range forward-looking LiDAR that currently achieves the leading detection level amongst forward-looking LiDARs in the industry. It has a maximum detection range of 250 meters and a standard detection range of 180 meters at 10% reflectivity. It features a resolution of 0.1° × 0.2° and a field of view of 120° × 24°. The overall design is exquisite and compact, with the ability to achieve a curved surface design on the optical window, seamlessly fitting the vehicle’s structure and appearance. It can easily be integrated into different positions, such as headlights and bumpers. In fact, we’re currently collaborating with a leading automotive glass manufacturer to explore new aesthetic and practical installation ideas for our LiDAR, including the co-development of a rear windshield installation solution featuring Robin-E.
  • Robin-W is the sibling of Robin-E. It’s a high-performance medium- to short-range wide-angle LiDAR and was designed with side and rear installations in mind. The stats are still fantastic. The standard detection range, for example, is 70 meters at 10% reflectivity. That is twice the ranging capability of similar products in the market right now. The same is true of the resolution: 0.1° × 0.4°, which is significantly higher than the detection accuracy of similar products. This is important because the high resolution provides more accurate target recognition, ensuring clear visibility and enabling early identification and appropriate reactions. According to our calculations, in typical scenarios such as high-speed lane changes, unprotected turns in urban areas, and vehicle parking, the detection distance in the side and rear directions needs to be at least 70 meters to meet safety requirements. Those considerations drive our design and inspire us to keep pushing further.


Robin-E – Image-grade Long Range Front-view LiDAR

Another thing that really sets us apart is that we don’t just design our sensors to have great specs and look great on paper – we design them to be easy to manufacture and, more importantly, easy for our partners to work with and design around. The power consumption, noise levels, and heat dissipation of the Robin line speak to this perfectly. Robin sensors – both Robin-E and Robin-W – have ultra-low power consumption – less than 10 W. To put that in context, this is the first time the power consumption of automotive-grade LiDAR has been reduced to single digits. This is significantly lower than the mainstream products in the current forward-looking LiDAR market, and when paired with the low noise level (below 20 dBA) and integrated heat dissipation, Robin-E is a very friendly sensor to work with from an engineering and design integration perspective.


Robin-W – High-performance Mid-to-short Range Wide-FOV LiDAR

But then it gets better. The cherry on top of all this is our OmniVidi software platform. This is perception middleware that serves as a complete perception solution. It incorporates cutting-edge deep learning frameworks and provides a comprehensive toolchain – including algorithm model components, perception function suites, intelligent data, and quantitative evaluation. By combining various advanced technologies and traditional methods, such as the SightNet model for single LiDAR forward perception, lightweight fusion perception model for multiple LiDARs, spatiotemporal fusion 3D object detection and tracking algorithms, clustering, and Kalman filtering, it effectively reduces computational load. It also integrates both BEV and RV dual-mode detectors to strike a balance between real-time performance and accuracy, enabling efficient detection of surrounding objects and delivering outstanding perception capabilities.
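
Innovusion has not published OmniVidi’s internals, but the building blocks Li mentions (clustering followed by Kalman filtering for tracking) are standard ingredients in LiDAR perception stacks. The sketch below illustrates the tracking step with a minimal constant-velocity Kalman filter applied to one clustered object centroid; every name and parameter here is illustrative, not taken from OmniVidi.

```python
import numpy as np

# Minimal constant-velocity Kalman filter for one tracked object centroid.
# State: [x, y, vx, vy]; measurement: clustered LiDAR centroid [x, y].
dt = 0.1                                   # 10 Hz frame interval (illustrative)
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)  # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)  # only position is observed
Q = np.eye(4) * 0.01                       # process noise (tuning parameter)
R = np.eye(2) * 0.1                        # measurement noise (tuning parameter)

x = np.zeros(4)                            # initial state
P = np.eye(4)                              # initial covariance

def kf_step(x, P, z):
    """One predict/update cycle given a new centroid measurement z = [x, y]."""
    x = F @ x                              # predict state
    P = F @ P @ F.T + Q                    # predict covariance
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Feed a few centroids from successive LiDAR frames (made-up values).
for z in [np.array([10.0, 2.0]), np.array([10.5, 2.1]), np.array([11.1, 2.2])]:
    x, P = kf_step(x, P, z)
print("Estimated position and velocity:", x)
```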

PB: LiDAR has traditionally been seen as a high-cost component, especially those based on fiber lasers. What strategies or innovations is your company implementing to make LiDAR solutions more affordable and accessible to automotive manufacturers?

Let me say that I don’t think the true value of LiDAR has even been realized yet. It is an extremely powerful technology with so many potential applications and possible impacts down the road, only a handful of which are known at the moment.

That said, let’s talk about price. I’ll start by noting that our first focus is, and always will be, the quality of our products – their precision, their performance, their reliability. At the same time, we are also always very cost-conscious and are continually looking to optimize our products and reduce costs where it makes sense.

We are actively trying to bring down the costs of producing our LiDAR and have already taken many active steps towards that goal. Building our production capabilities and supply chains from scratch meant we needed to invest a lot early on. But as volume increases and efficiencies of scale become evident, there will be significant room for cost reductions. This requires continuous expansion of the ecosystem and ongoing investments, but it also means that many components of 1550nm LiDAR that are currently expensive are beginning to come down in price as our technology advances and production scales up. With this, you will also see the price gap between 1550nm and 905nm LiDAR start shrinking, with our expectation being that the cost difference between the two will eventually be minimal, around 5% to 10%.

In addition to keeping an eye on current costs, we’ve also created and begun to implement a roadmap for meaningful cost reductions and concrete plans for achieving them. In the meantime, Innovusion is outperforming our relevant competition by a wide margin in terms of current performance and cost-effectiveness, and we’ll continue to focus on delivering valuable, full-lifecycle solutions to customers at prices they can afford.

PB: As LiDAR technology advances, how do you anticipate it will impact the overall design and aesthetics of vehicles? Will LiDAR sensors become more discreet and seamlessly integrated?

Innovusion is dedicated to helping our OEM partners achieve their design visions and providing them with the most flexible and least intrusive options for their designs. While this can mean offering higher-precision sensors that consume less power and generate less heat, it also means delivering products that seamlessly fit into the design visions and demands of our partners. To this end, the products we collaborate on with customers undergo intense customization to meet their specific design requirements. While the OEMs will typically take the lead in the design process, we then work closely with them to make those visions a reality. This often means there is a deep degree of customization that needs to happen with the physical design of LiDAR itself, like the curvature of the optical window or the size of the device.

What we really love about this collaborative process, though, is that it can also drive aesthetic design innovations that we are then able to leverage with other products or partners down the road. The power and spirit of this collaboration is perhaps best seen in the NIO ET7 being awarded a Red Dot design award for the smooth integration of LiDAR and other sensors into their autonomous driving capabilities. As the jury noted, ‘The reduced design of the NIO ET7 and the comprehensive use of smart technology merge into a harmonious overall appearance’.

In addition to working closely with our OEM partners to deliver on their vision, we’re also proactively driving the design possibilities of LiDAR through additional research partnerships with major auto suppliers like Fuyao and Wideye. These allow us to explore different installation positions and methods – like installation behind the windshield, within bumpers and headlights, or within the roofline – to better align with the overall aesthetic design of vehicles.

PB: What advancements or developments in LiDAR technology do you believe are necessary to overcome the limitations or challenges currently faced in automotive applications?

There are a number of key areas that we are actively focused on that we believe will help speed up the mass adoption of LiDAR across all manufacturers. First, there is the continuous progress and integration of laser detectors, which will further improve the performance and reduce the cost of LiDAR. In addition, the continuous improvement of LiDAR signal processing algorithms will greatly assist in vehicle control. Finally, as the capabilities of LiDAR continue to develop, the range of LiDAR usage scenarios and requirements keeps expanding. Currently, front-view LiDAR products can’t fully meet this demand. In the future, there will be a need for targeted products for new requirements and new scenarios, such as side-view LiDAR.

PB: LiDAR technology has traditionally relied on mechanical scanning systems, but solid-state LiDAR solutions are gaining attention. What advantages and challenges do you see in adopting solid-state LiDAR for automotive applications? How is Innovusion positioned regarding this transition towards solid-state LiDAR?

There has been a lot of buzz around solid-state LiDAR for years, and with good reason. Solid-state LiDAR technology holds the potential for high reliability, but the current technology is still immature and has challenges to solve regarding detection range and field of view. Once these are solved, manufacturing becomes the next challenge and will need to be proven at scale.

In contrast, Falcon, a leading hybrid solid-state LiDAR, is being produced at scale, all while meeting the stringent automotive-grade reliability standards of the industry and the demanding technical expectations of our partners. Solid-state LiDAR just can’t deliver in the same way today. Will solid-state get there and replace hybrid? We’ll see, but for the moment, hybrid solid-state is the best LiDAR technology available and the only one deliverable at scale to meet our partners’ demanding needs.

Moving forward, we will continue to evaluate and research a wide variety of LiDAR technologies and approaches and will always be dedicated to providing our customers with the best LiDAR and sensing technologies available. We’ll continue to select the technology route that we consider the most mature, suitable, and cost-effective based on practical usage scenarios.

PB: LiDAR technology often works in conjunction with other sensor systems like cameras and radar. How do you envision the synergy between LiDAR and these complementary technologies in enabling safer and more reliable autonomous driving systems?

We expect these three sensor technologies to coexist and complement each other for a long time. LiDAR technology itself plays an undeniably critical role in autonomous driving systems by providing high-precision 3D information that other sensors are unable to generate. But when paired with other sensor systems, such as cameras and radars, there are synergistic benefits to the overall safety and reliability of the system. For example, cameras provide high-resolution images for object and scene recognition. LiDAR, on the other hand, provides precise distance and 3D spatial information, detecting objects that may be challenging for cameras, such as pedestrians or obstacles in low-light conditions. However, through the collaboration of cameras and LiDAR, more comprehensive and accurate perception information can be obtained, and the surrounding environment can be perceived and understood more accurately and holistically. The result is that autonomous driving systems are able to make better and quicker decisions, leading to safer, more comfortable experiences for us, our friends, our families, and society as a whole.
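
One concrete way these modalities are commonly combined (not necessarily how any particular OEM or Innovusion does it) is to project LiDAR points into the camera image so that detected pixel regions can be assigned measured distances. A minimal sketch, assuming a known 3x4 projection matrix obtained from extrinsic and intrinsic calibration:

```python
import numpy as np

def project_lidar_to_image(points_xyz, P):
    """Project Nx3 LiDAR points (already in camera coordinates) to pixels.

    P is a 3x4 camera projection matrix from calibration (assumed known).
    Returns pixel coordinates and the metric depth of each visible point.
    """
    pts_h = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])  # homogeneous
    uvw = (P @ pts_h.T).T
    depth = uvw[:, 2]
    uv = uvw[:, :2] / depth[:, None]        # perspective divide
    in_front = depth > 0                     # discard points behind the camera
    return uv[in_front], depth[in_front]

# Illustrative calibration matrix and two LiDAR returns (values are made up).
P = np.array([[700.0, 0.0, 640.0, 0.0],
              [0.0, 700.0, 360.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
points = np.array([[2.0, 0.5, 15.0],
                   [-1.0, 0.2, 30.0]])
uv, depth = project_lidar_to_image(points, P)
print(uv)     # pixel locations to associate with camera detections
print(depth)  # metric range that the camera alone cannot measure
```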

PB: Looking ahead, what do you foresee as the next major breakthrough or innovation in LiDAR technology that will have a transformative impact on the automotive industry?

With LiDAR already being recognized as a necessary sensor for intelligent driving and with our Falcon sensors already in tens of thousands of cars on roads all around the world, we’re really excited about the transformative impacts that it is already enabling.

That said, we’re just getting started, and we think LiDAR is too. We’re actively exploring new technologies and approaches, and we’re really excited about some of the promising new technologies we’re working on. For example, while 1550nm and 905nm have been established as the de facto wavelength bands for emitted lasers, what happens when you explore higher bands? What characteristics of detected targets can be improved, and what limitations of current LiDAR systems can we mitigate? These are the kinds of questions we’re asking and the kind of research we’re conducting, all in the hopes of helping to spark the next big wave of innovations in LiDAR and sensing technology.

PB: Is there anything else you would like to add?

Thank you so much. We’re excited to help bring the power of intelligent vision to everyone and everything and can’t wait to see all the possibilities that will be unlocked as a result. With LiDAR, we think the future is bright, and we’re excited to be a part of such a dynamic and growing industry.


“Reinventing Smart Cities with Computer Vision,” a Presentation from Hayden AI

September 29, 2023

Vaibhav Ghadiok, Co-founder and CTO of Hayden AI, presents the “Reinventing Smart Cities with Computer Vision” tutorial at the May 2023 Embedded Vision Summit. Hayden AI has developed the first AI-powered data platform for smart and safe city applications such as traffic enforcement, parking and asset management. In this talk,…

AI and the Road to Full Autonomy in Autonomous Vehicles

September 28, 2023

The road to fully autonomous vehicles is, by necessity, a long and winding one; systems that implement new technologies that increase the driving level of vehicles (driving levels being discussed further below) must be rigorously tested for safety and longevity before they can make it to vehicles that are bound for public streets. The network of power supplies, sensors, and electronics that is used for Advanced Driver Assistance Systems (ADAS) – features of which include emergency braking, adaptive cruise control, and self-parking systems – is extensive, with the effectiveness of ADAS being determined by the accuracy of the sensing equipment coupled with the accuracy and speed of analysis of the on-board autonomous controller.

The on-board analysis is where artificial intelligence comes into play and is a crucial element of the proper functioning of autonomous vehicles. In market research company IDTechEx’s recent report on AI hardware at the edge of the network, “AI Chips for Edge Applications 2024 – 2034: Artificial Intelligence at the Edge”, AI chips (those pieces of semiconductor circuitry that are capable of efficiently handling machine learning workloads) are projected to generate revenue of more than US$22 billion by 2034, and the industry vertical expected to see the highest level of growth over the next ten-year period is the automotive industry, with a compound annual growth rate (CAGR) of 13%.


Circuitry and electrical components within a car, many of which work together to comprise ADAS.

The part that AI plays

The AI chips used in vehicles are found in centrally located microcontrollers (MCUs), which are, in turn, connected to peripherals such as sensors and antennae to form a functioning ADAS. On-board AI compute can be used for several purposes, such as driver monitoring (where controls are adjusted for specific drivers, head and body positions are monitored in an attempt to detect drowsiness, and the seating position is changed in the event of an accident), driver assistance (where AI is responsible for object detection and appropriate corrections to steering and braking), and in-vehicle entertainment (where on-board virtual assistants act in much the same way as on smartphones or in smart appliances). The most important of these is driver assistance, as the robustness and effectiveness of the AI system determines the vehicle’s autonomous driving level.

Since their introduction in 2014, the SAE Levels of Driving Automation (shown below) have been the most-cited framework for driving automation in the automotive industry, defining six levels that range from level 0 (no driving automation) to level 5 (full driving automation). The current highest state of autonomy in the private automotive industry (covering vehicles for private use, such as passenger cars) is SAE Level 2, with the jump between level 2 and level 3 being significant, given the relative advancement of technology required to achieve situational automation.


The SAE levels of driving automation.

A range of sensors installed in the car – relying on LiDAR (Light Detection and Ranging) and vision sensors, among others – relays important information to the main processing unit in the vehicle. The compute unit is then responsible for analysing this data and making the appropriate adjustments to steering and braking. In order for processing to be effective, the machine learning algorithms that the AI chips employ must be extensively trained prior to deployment. This training involves the algorithms being exposed to a great quantity of ADAS sensor data, such that by the end of the training period they can accurately detect objects, identify objects, and differentiate objects from one another (as well as objects from their background, thus determining the depth of field). Passive ADAS is where the compute unit alerts the driver to necessary action, either via sounds, flashing lights, or physical feedback. This is the case in reverse parking assistance, for example, where proximity sensors alert the driver to where the car is in relation to obstacles. Active ADAS, however, is where the compute unit makes adjustments for the driver. As these adjustments occur in real time and need to account for varying vehicle speeds and weather conditions, it is of great importance that the chips that comprise the compute unit are able to make calculations quickly and effectively.
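
To make the inference side of this pipeline concrete, the sketch below runs a generic pretrained object detector over a single camera frame. It uses an off-the-shelf torchvision model and a hypothetical image file purely for illustration; production ADAS stacks run purpose-built, safety-qualified networks compiled for the vehicle’s own accelerator, not this code.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Generic pretrained detector, for illustration only; real ADAS controllers
# run purpose-built networks on the vehicle's accelerator.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = Image.open("front_camera_frame.jpg")   # hypothetical camera frame
with torch.no_grad():
    detections = model([to_tensor(frame)])[0]

# Keep confident detections; downstream logic would map classes such as
# "person" or "car" to braking and steering decisions in an active ADAS.
for box, label, score in zip(detections["boxes"], detections["labels"], detections["scores"]):
    if score > 0.6:
        print(label.item(), round(score.item(), 2), box.tolist())
```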

A scalable roadmap


The trend for automotive SoCs is for performance to increase each year while the process node moves toward the leading edge.

SoCs for vehicular autonomy have only been around for a relatively short amount of time, yet it is clear that there is a trend towards smaller process nodes, which aid in delivering higher performance. This makes sense logically, as higher levels of autonomy will necessarily require a greater degree of computation (as the human computational input is effectively outsourced to semiconductor circuitry). The above graph collates the data of 11 automotive SoCs, one of which was released in 2019, while others are scheduled for automotive manufacturers’ 2024 and 2025 production lines. Among the most powerful of the SoCs considered are the Nvidia DRIVE Thor, expected in 2025, for which Nvidia claims a performance of 2,000 trillion operations per second (TOPS), and the Qualcomm Snapdragon Ride Flex, which has a performance of 700 TOPS and is expected in 2024.
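
As a rough illustration of what such TOPS figures imply, the back-of-envelope estimate below converts a rated compute budget into an upper bound on perception frames per second. The per-frame operation count and the utilization factor are assumptions chosen for illustration, not figures published by either vendor.

```python
# Back-of-envelope throughput estimate from a TOPS rating. The operations per
# inference and the utilization factor below are assumptions for illustration.

def max_inferences_per_second(tops, ops_per_inference, utilization=0.3):
    """tops: rated trillions of ops/s; ops_per_inference: ops for one forward pass."""
    usable_ops_per_second = tops * 1e12 * utilization
    return usable_ops_per_second / ops_per_inference

ops = 200e9   # hypothetical perception network needing 200 GOPs per frame
for name, tops in [("DRIVE Thor (claimed)", 2000), ("Snapdragon Ride Flex", 700)]:
    print(f"{name}: ~{max_inferences_per_second(tops, ops):,.0f} frames/s upper bound")
```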

Moving to smaller node sizes requires more expensive semiconductor manufacturing equipment (particularly at the leading edge, where Deep Ultraviolet and Extreme Ultraviolet lithography machines are used) and more time-consuming manufacturing processes. As such, the capital required for foundries to move to more advanced process nodes proves a significant barrier to entry for all but a few semiconductor manufacturers. This is one reason that several IDMs (integrated device manufacturers) are now outsourcing high-performance chip manufacturing to those foundries already capable of such fabrication.

In order to keep costs down in the future, it is also important for chip designers to consider the scalability of their systems: because adoption of higher autonomous driving levels will arrive stepwise, designers that do not consider scalability now run the risk of paying for new designs at each successive node. Given that 4 nm and 3 nm chip design (at least for the AI accelerator portion of the SoC) likely offers sufficient performance headroom up to SAE Level 5, it behooves designers to choose hardware that can adapt to increasingly advanced AI algorithms.

It will be some years until we see cars on the road capable of the most advanced automation levels proposed above, but the technology to get there is already gaining traction. The next couple of years, especially, will be important ones for the automotive industry.

Report coverage

IDTechEx forecasts that the global AI chips market for edge devices will grow to US$22.0 billion by 2034, with AI chips for automotive accounting for more than 10% of this figure. IDTechEx’s report gives analysis pertaining to the key drivers for revenue growth in edge AI chips over the forecast period, with deployment within the key industry verticals – consumer electronics, industrial automation, and automotive – reviewed. Case studies of automotive players’ leading system-on-chips (SoCs) for ADAS are given, as are key trends relating to performance and power consumption for automotive controllers.

More generally, the report covers the global AI Chips market across eight industry verticals, with 10-year granular forecasts in six different categories (such as by geography, by chip architecture, and by application). IDTechEx’s report “AI Chips for Edge Applications 2024 – 2034: Artificial Intelligence at the Edge” answers the major questions, challenges, and opportunities the edge AI chip value chain faces. For further understanding of the markets, players, technologies, opportunities, and challenges, please refer to the report.

For more information on this report, please visit www.IDTechEx.com/EdgeAI, or for the full portfolio of AI research available from IDTechEx please visit www.IDTechEx.com/Research/AI.

About IDTechEx

IDTechEx guides your strategic business decisions through its Research, Subscription and Consultancy products, helping you profit from emerging technologies. For more information, contact research@IDTechEx.com or visit www.IDTechEx.com.

What is the Role of Multi-camera Solutions in Surround-view Systems?

September 25, 2023

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems.

The rise of multi-camera systems has helped traditional automobiles and autonomous vehicles to eliminate blind spots with a comprehensive view of their surroundings. Discover how surround-view systems work and unearth insights on all the considerations for picking the ideal multi-camera setup.

Safety remains a top priority, especially for large vehicles and fleets. Blind spots have been a persistent challenge, leading to numerous accidents and collisions. However, with the rapid advancement of multi-camera systems, it becomes possible to eliminate these blind spots and create a 180-degree or 360-degree view of a vehicle’s surroundings. Not only does this technology benefit traditional automobiles, but it also plays a vital role in developing autonomous vehicles, such as autonomous tractors and patrol robots.

In this blog, you’ll understand the mechanics of surround-view systems, their benefits, and what factors to consider when choosing a multi-camera setup.

How multi-camera solutions work in surround-view systems

Integrating multiple cameras strategically placed around the vehicle creates a real-time panoramic view, giving drivers an unprecedented view of their surroundings. This comprehensive visibility empowers them to make more informed decisions, easily navigate tight spaces, and significantly reduce the probability of collisions.

Moreover, surround-view systems are not limited to conventional vehicles alone. Autonomous tractors and patrol robots, which operate without human intervention, rely heavily on surround-view systems to ensure their own safety and that of the environment they interact with.

Each camera plays a pivotal role in capturing data from its designated angle, and all the feeds are synchronized to deliver a seamless, coherent view. This ensures that the driver or autonomous algorithms receive accurate, real-time information about their surroundings, enabling them to make split-second decisions that have a major impact on safety.

Hardware synchronization and processing capability

Surround-view systems rely on a hardware-synchronized multi-camera setup to achieve an accurate and real-time view. In this case, image capture is initiated in all the cameras simultaneously through a hardware trigger like an external PWM signal. So, they have the same frame start – ensuring perfect alignment of the frames.

The synchronization is crucial because it ensures that all camera feeds are captured at the same instant, eliminating any discrepancies that may arise from time delays. This synchronization is achieved through sophisticated hardware and communication protocols, guaranteeing that the data from different cameras align perfectly.
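
As an illustration of how such a trigger signal might be generated on an embedded host, the sketch below drives a GPIO pin as a 30 Hz PWM wave using the Jetson.GPIO library. The pin number, frequency, and wiring are placeholders; a real design would follow the camera vendor’s trigger specification and the carrier board’s pin mapping.

```python
import time
import Jetson.GPIO as GPIO   # RPi.GPIO-compatible GPIO library for NVIDIA Jetson boards

TRIGGER_PIN = 33             # placeholder: a PWM-capable pin on the 40-pin header
FRAME_RATE_HZ = 30           # every camera starts its exposure on this signal's edge

GPIO.setmode(GPIO.BOARD)
GPIO.setup(TRIGGER_PIN, GPIO.OUT, initial=GPIO.LOW)

# A 30 Hz square wave whose rising edge is fanned out to all cameras in the rig
# acts as the common frame-start trigger described above.
pwm = GPIO.PWM(TRIGGER_PIN, FRAME_RATE_HZ)
pwm.start(50)                # 50% duty cycle

try:
    time.sleep(60)           # keep triggering for one minute in this example
finally:
    pwm.stop()
    GPIO.cleanup()
```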

Furthermore, processing capability is equally vital in a surround-view system. The sheer volume of data generated by multiple cameras demands a robust platform capable of handling the desired throughput efficiently. Advanced processors, GPUs, or specialized hardware units are employed to rapidly process the incoming data and stitch together the different camera feeds to form a seamless panoramic view.
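
A minimal sketch of the host-side flow, with OpenCV standing in for the vendor capture stack: grab one frame from each already hardware-triggered camera and compose a simple preview. The device indices are placeholders, and the naive horizontal concatenation skips the undistortion and ground-plane warping that a real surround-view system performs.

```python
import cv2
import numpy as np

# Open four cameras (device indices are placeholders; a GMSL rig typically
# exposes each camera as a V4L2 device on the host after deserialization).
caps = [cv2.VideoCapture(i) for i in range(4)]

def grab_frames(caps):
    """Read one frame from each camera. With a hardware trigger the exposures
    already share the same frame start, so the host simply collects the frames."""
    frames = []
    for cap in caps:
        ok, frame = cap.read()
        if not ok:
            raise RuntimeError("camera read failed")
        frames.append(frame)
    return frames

frames = grab_frames(caps)

# Naive preview: real surround-view systems first undistort each (usually
# fisheye) feed and warp it onto a common ground plane before blending.
panorama = np.hstack([cv2.resize(f, (480, 270)) for f in frames])
cv2.imwrite("surround_preview.jpg", panorama)

for cap in caps:
    cap.release()
```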

What to ask while picking a multi-camera setup for your surround-view system

When choosing the best-fit multi-camera setup for your surround-view system, several questions must be carefully answered. These include:

How many cameras are required?

The first step in designing a surround-view system is determining the appropriate number of cameras. These systems can vary from having 3 to 10 cameras, depending on the specific requirements of the vehicle or device. Larger vehicles with more complex blind spots may necessitate a higher number of cameras for complete coverage. The desired video stream quality also influences the number of cameras, as a higher count can deliver a clearer view with more details and reduced lens distortions.

Which type of camera interface will be used?

The Gigabit Multimedia Serial Link (GMSL) interface is highly recommended for surround-view systems. GMSL enables high-resolution data transfer at high frame rates, facilitating real-time surround-view capabilities. In some cases, when cameras are placed relatively close to the host processor (1-2 meters), a USB interface can be considered. However, for optimal reliability, GMSL stands out as the preferred choice.

What is the host platform?

The host platform, typically consisting of processors and graphics units, plays a crucial role in handling the incoming data from multiple cameras. The number of camera pipelines must be taken into account when selecting the host platform. High-performance processors like NVIDIA Jetson AGX Xavier or AGX Orin are recommended for efficient processing and seamless integration of the surround-view system.

What is the synchronization method?

Achieving a cohesive 360-degree view requires all cameras to align perfectly with each other. This synchronization is essential for creating a seamless panoramic view. Hardware-based synchronization is the preferred method, as it ensures frame-level synchronization, which is crucial for accurately merging camera feeds. Software-based synchronization might not guarantee this level of precision, leading to potential discrepancies.

How about the latency level?

Latency is a critical factor, especially in real-time applications like surround-view systems. Whether for the driver of a conventional vehicle or an AI algorithm in an autonomous vehicle, minimal glass-to-glass latency is essential for prompt navigational decisions. Low latency helps avoid accidents and ensures the system can respond swiftly to changes in the surroundings.
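
True glass-to-glass latency is normally measured with external hardware, for example by filming a flashing LED and the display together with a high-speed camera. A rough software-only proxy, capture-to-display time, can still be logged as below; it undercounts sensor exposure and display scan-out, so it should be treated as a lower bound. The device index is a placeholder.

```python
import time
import cv2

cap = cv2.VideoCapture(0)              # placeholder device index
samples = []

for _ in range(100):
    t_request = time.monotonic()       # just before asking the driver for a frame
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("preview", frame)
    cv2.waitKey(1)                     # hand the frame to the display pipeline
    samples.append(time.monotonic() - t_request)

cap.release()
cv2.destroyAllWindows()
if samples:
    median_s = sorted(samples)[len(samples) // 2]
    print(f"Median capture-to-display time: {median_s * 1000:.1f} ms")
```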

e-con Systems’ multi-camera solutions for surround-view systems

e-con Systems offers unique multi-camera solutions that help surround-view systems see better, think smarter, and act faster. For instance, one of our latest releases is STURDeCAM31 – a game-changer for Advanced Driver Assistance Systems (ADAS). It comes with GMSL2 capability, so cameras can be positioned far from the host processor with long-distance support. Its integrated coaxial cables make the transmission of large video outputs easier. STURDeCAM31 also comes with an LED Flicker Mitigation (LFM) feature to capture images seamlessly under pulsed light sources like LEDs.

Learn more about STURDeCAM31

Explore our Synchronized Multi-Camera Solutions page or visit our Camera Selector page to see our complete portfolio.

If you want more information on how we can help integrate powerful multi-camera systems into your applications, please contact us at camerasolutions@e-consystems.com.

Suresh Madhu
Product Marketing Manager, e-con Systems

The Global Market for Lidar in Autonomous Vehicles Will Grow to US$8.4 Billion by 2033

September 14, 2023

The demand for lidars to be adopted in the automotive industry drives huge investment and rapid progress, with innovations in beam steering technologies, performance improvement, and cost reduction in lidar transceiver components. These efforts can enable lidars to be implemented in a wider range of application scenarios beyond conventional usage and automobiles.

However, the rapidly evolving lidar technologies and markets leave many open questions. The technology landscape is cluttered with numerous options for every component in a lidar system.

In the report “Lidar 2023-2033: Technologies, Players, Markets & Forecasts”, experts at IDTechEx have identified four important technology choices that every lidar player and lidar user must make: measurement process, laser, beam steering mechanism, and photodetector.  Dr Xiaoxi He, IDTechEx Research Director and lead author of the report, comments, “The technology choices made today will have immense consequences for performance, price, and scalability of lidar in the future. The present state of the lidar market is unsustainable because winning technologies and players will inevitably emerge, consolidating the technology and business landscapes.”

IDTechEx research in “Lidar 2023-2033: Technologies, Players, Markets & Forecasts” finds that the global market for 3D lidar in automotive will grow to US$8.4 billion by 2033.

The report presents an unbiased analysis of primary data gathered via interviews with key players and builds on IDTechEx’s expertise in the transport, electronics, and photonics sectors. While the market analysis and forecasts focus on the automotive industry, the technology analysis and company profiles also cover lidar for industrial automation, robotics, smart city, security, and mapping. For more information on lidar, including downloadable report sample pages, please visit www.IDTechEx.com/Lidar.

About IDTechEx

IDTechEx guides your strategic business decisions through its Research, Subscription and Consultancy products, helping you profit from emerging technologies. For more information, contact research@IDTechEx.com or visit www.IDTechEx.com.

Autonomous Vehicles Will Soon Be Safer Than Humans, and Some Already Are

September 12, 2023

The promise of autonomous vehicles has been a long time coming. While many are still waiting to see the fruits of all this work, there are some places, such as Arizona and San Francisco, where autonomous cars are starting to become a reality. Furthermore, IDTechEx’s new industry report “Autonomous Cars, Robotaxis and Sensors 2024-2044” predicts rapid growth in the number of cities that will offer robotaxi services in the next few years. So, with robotaxis rapidly becoming an everyday reality, the industry and experts must ask: are autonomous robotaxis safe enough?


Miles per disengagement is used as a proxy measure of safety and performance. “Best 3” is the average performance of each year’s top 3 performing companies.

This summer, the robotaxi industry has seen more commercialization activity, with both Waymo and Cruise being given the green light by the California Public Utilities Commission (CPUC) to expand their commercial services in San Francisco. But only weeks after that announcement, San Francisco has seen protests around the deployment of autonomous vehicles, and the California DMV has halved the number of vehicles that Cruise is permitted to have in testing. Some inhabitants of San Francisco are becoming disenchanted with the city’s perpetual status as a proving ground for this technology, with a group called Safe Street Rebel leading the protests. Their disruption mechanism is called coning and involves placing a traffic cone on the bonnet of autonomous vehicles, rendering them inoperable until the cone is removed — a somewhat embarrassing situation considering all the vehicles’ technology.

So, are autonomous vehicles really that unsafe and not ready for public roads, or is this protest more about the city’s technology testbed status? Waymo claims on its website that it outperforms human drivers when mitigating and avoiding collisions, but what does the data out of California say?

Autonomous vehicle safety is an area that IDTechEx’s autonomous vehicle experts have tracked closely and carefully as autonomous car testing has proliferated. IDTechEx uses data from the California DMV to understand how autonomous vehicles are performing and improving over the years. When assessing the safety of autonomous vehicles, several metrics can be considered: how many testing miles has each company amassed, how often does the safety driver need to intervene with the autonomous system, and how often does the autonomous system cause a crash?

A key metric that IDTechEx uses to monitor autonomous vehicle safety is miles per disengagement. This measures how frequently, or hopefully how infrequently, the autonomous vehicle safety driver needs to intervene with the autonomous system. IDTechEx has measured this since 2015 and has seen exponential growth in the performance of autonomous vehicles. Back in 2015, Waymo recorded 424,000 miles of autonomous testing, during which its safety drivers disengaged the system 341 times, meaning there was an average of approximately 1,200 miles between disengagements. Waymo were the best company by this metric that year. For reference, IDTechEx estimates that human drivers in the US average approximately 200,000 miles between collisions. If it is assumed that each of Waymo’s disengagements would lead to a collision, which is slightly unfair against the autonomous driver, then it would be around 0.5% as safe as a human driver.
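
The comparison above reduces to simple arithmetic; the sketch below reproduces it using only the figures quoted in this article.

```python
# Reproduce the 2015 Waymo comparison from the figures quoted above.
waymo_miles_2015 = 424_000
waymo_disengagements_2015 = 341
human_miles_per_collision = 200_000      # IDTechEx estimate for US drivers

miles_per_disengagement = waymo_miles_2015 / waymo_disengagements_2015
print(f"Miles per disengagement: {miles_per_disengagement:,.0f}")    # ~1,200

# Worst case: assume every disengagement would otherwise have been a collision.
relative_safety = miles_per_disengagement / human_miles_per_collision
print(f"Relative to human drivers: {relative_safety:.1%}")           # ~0.6%, in line with the "around 0.5%" above
```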

However, the autonomous vehicle industry has made significant progress since then. In fact, IDTechEx has since seen the number of miles per disengagement nearly double year on year.


The number of testing miles submitted by the top testing companies in California between 2015 and 2022.

In 2022, Cruise were the leader when it came to disengagement performance, with a score of nearly 96,000 miles per disengagement, nearly 50% as safe as humans. During its 863,000 miles of testing, safety drivers only needed to intervene with the system nine times. As part of IDTechEx’s research in “Autonomous Cars, Robotaxis and Sensors 2024-2044”, IDTechEx looks closely at the disengagements and collisions in which autonomous vehicles are involved. Doing so uncovers a surprising fact: four out of the nine disengagements were caused by the poor performance of other nearby drivers. If these are removed from the equation, then Cruise’s miles per disengagement score shoots up to over 170,000, 85% of the way to the rate at which humans have collisions.

Miles per disengagement is only a proxy for autonomous vehicle safety, though. Since a safety driver has intervened, it is impossible to know whether the car would have collided or not. Instead, perhaps the number of collisions that autonomous vehicles are involved in should be considered.

Between January 2019 and May 2023, the autonomous vehicle companies testing across California submitted more than 450 collision reports. These reports cover a wide range of collision types, from collisions with other vehicles to hitting curbs and even the vehicles being attacked by pedestrians. As part of IDTechEx’s research, its analysts have read and analyzed each of these reports, finding that only 3.4% of collisions could be attributed to the poor performance of the autonomous system. Another way to look at it is that in 2022, the autonomous driver would cause collisions at a rate of 1 collision per 1.3 million miles, significantly better than human drivers. But this is with a human behind the wheel monitoring the system. What about when the system has no human safety net? How much do they collide then?

Since 2020, California has allowed driverless autonomous testing on its streets, and two companies have taken advantage of this: Waymo and Cruise. Between 2021 and 2022, Waymo recorded just under 70,000 miles of driverless activity. On the other hand, Cruise only started recording driverless miles in 2022 but submitted a staggering 590,000 miles. During those miles, the vehicles were involved in 15 collisions, i.e., 1 collision every ~40,000 miles, or 5 times more often than their human counterparts.

One point of redemption is that these miles were exclusively accumulated in San Francisco, one of the toughest driving environments in the US for autonomous systems. But also tough for humans. With the slower speeds and increased pedestrian presence, IDTechEx estimates that the collision rate amongst human drivers increases from one per ~200,000 miles (the US average across all road types) to one in every 107,000 miles, only half as good, but still better than autonomous drivers.
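
Reading the 15 collisions against Cruise’s 590,000 driverless miles, as the ~40,000-mile figure above implies, the comparison against the human baselines quoted in this article works out as follows.

```python
# Driverless collision rate versus the human baselines quoted above.
driverless_miles = 590_000               # Cruise's 2022 driverless testing
driverless_collisions = 15

miles_per_collision_av = driverless_miles / driverless_collisions
print(f"Autonomous: one collision per ~{miles_per_collision_av:,.0f} miles")      # ~39,000

human_us_average = 200_000               # one collision per X miles, all US road types
human_san_francisco = 107_000            # IDTechEx estimate for San Francisco conditions
print(f"vs the US average: {human_us_average / miles_per_collision_av:.1f}x more often")       # ~5x
print(f"vs SF human drivers: {human_san_francisco / miles_per_collision_av:.1f}x more often")  # ~2.7x
```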

There is one other statistic that should be considered when talking about the safety performance of autonomous vehicles. Of those 450+ collisions recorded by the companies testing autonomous cars, none involved a major injury or death. In the 4 years of testing, from 2019 to 2022, that is nearly 14 million miles without a serious injury or fatality. NHTSA say that with human drivers, a fatality happens roughly once per 75 million miles of human driving. So autonomous vehicles still have a way to go to catch up, but it is looking promising.

Whether you look at miles per disengagement, miles per collision, or miles per fatality, humans still have a better track record than autonomous vehicles. However, human safety has been fairly stagnant. The rate at which we crash is not changing that much, and further improvement is mostly coming from crash mitigation technology, such as automatic emergency braking systems and blind spot detection. One thing that can be said for autonomous vehicles is that their safety has been improving at somewhat of an exponential rate. Something that humans are very unlikely to mimic. IDTechEx does not believe that autonomous vehicles are as safe as humans yet, nor are they ready for widespread unsupervised deployment. The rate of improvement that autonomous technologies have shown demonstrates that there is the potential for them to far exceed human levels of safety in the future, leading us toward a world in which we stop questioning whether autonomous cars are ready and start questioning whether human drivers are safe enough.

To find out more about the IDTechEx report “Autonomous Cars, Robotaxis and Sensors 2024-2044”, including downloadable sample pages, please visit www.IDTechEx.com/autonomouscars.

Dr James Jeffs
Senior Technology Analyst, IDTechEx

About IDTechEx

IDTechEx guides your strategic business decisions through its Research, Subscription and Consultancy products, helping you profit from emerging technologies. For more information, contact research@IDTechEx.com or visit www.IDTechEx.com.

Mark AB Capital Partners with Blaize In an Exclusive Relationship to Set Up a State-of-the-art Facility in Abu Dhabi to Provide Sustainable Edge AI Solutions for UAE

September 5, 2023

Initial contracts to deliver sustainable Smart Cities and Airport solutions projected to generate a minimum of $50m in orders annually

EL DORADO HILLS, CA — September 5, 2023 — Mark AB Capital today announced a multi-year Memorandum of Understanding with Blaize, the leader in new-generation supercomputing. Blaize will offer a comprehensive AI edge hardware and software development platform implemented on Blaize edge solutions optimized for sustainability and operational efficiencies. Blaize will also create an AI Data Center powered by its fully programmable Blaize Graph Streaming Processor, providing the lowest Total Cost of Ownership.

Blaize will develop applications that cost-effectively offer real-time monitoring of video and data-enabled IoT technologies and connected sensors, providing valuable insights to enhance security and improve the lives of UAE citizens. This will help UAE better manage its massive smart city initiatives in essential areas, such as mobility, connected city municipalities, public safety and productivity, transportation patterns, and remote maintenance of critical infrastructure elements. Initial sectors targeted for decarbonization with AI include AgTech and Healthcare solutions, water and electrical systems management, automating hazard detection, extensive Smart City applications, airside and landside airport initiatives, and massive infrastructure safety and maintenance across the UAE.

The UAE is building its future economy based on knowledge and innovation while nurturing and protecting the environment. Blaize’s tightly coupled software and small-form-factor, low-power, high-speed data processing hardware deliver an efficient, usable end-to-end edge AI workflow that will power the UAE’s transition to a smart, more connected environment and future economy, including the efficient use of a Blaize-powered data center.

“We are looking forward to partnering with Blaize and believe that embracing the power of AI is not just about innovation; it’s about building a brighter future together. With this partnership, we’re not only harnessing AI’s potential, but also paving the way towards an AI-powered nation where progress knows no bounds,” said Abdullah Mohamed Al Qubaisi, CEO of Mark AB Capital.

Blaize and Mark AB will bring AI technology to the $20B GCC (Gulf Cooperation Council) market, expecting to generate $50m in orders annually over several years. Blaize will work with Mark AB to create an AI software training center to certify at least 5,000 UAE citizens in Blaize AI Studio™, the code-free AI software platform, creating high-paying AI development jobs that support the UAE’s local employment initiatives. Mark AB and Blaize will work together to make the UAE the world’s first total edge AI nation, allowing full use of AI to create efficiency in energy, security, and education while building its future economy.

“UAE is focused on optimizing city functions and promoting the economic growth of its municipalities while improving the safety and quality of life for its citizens. Blaize is delighted to deliver intelligence at the edge of everywhere with a programmable full-stack AI architecture to create use cases that positively impact people’s lives and help a nation like UAE achieve its sustainability goals,” said Dinakar Munagala, CEO and Co-founder of Blaize.

About Mark AB

Mark AB Capital is a forward-thinking investment firm with a focus on strategic partnerships that drive innovation and progress. Committed to shaping the future, Mark AB Capital seeks opportunities that align with its vision for a better world. www.markabcapital.net

About Blaize

Blaize is a leading provider of a proprietary purpose-built, full-stack hardware architecture and low-code/no-code software platform that enables edge AI processing solutions at the network’s edge for computing in multiple large and rapidly growing markets: automotive, mobility, retail, security, industrial automation, medical devices, and many others. Blaize’s novel solution addresses the technical requirements that edge AI processing imposes across those verticals (very low latency and high thermal and power efficiency), which previously relied on retrofitting sub-optimized AI solutions designed more for data centers and the cloud. Blaize has previously raised over $200MM from strategic investors such as DENSO, Daimler, Magna, Samsung, and Bess Ventures, and financial investors such as Franklin Templeton, Temasek, GGV, and others. With headquarters in El Dorado Hills (CA), Blaize has teams in San Jose (CA) and North Carolina, and subsidiaries in Hyderabad (India) and Leeds and Kings Langley (UK), with 200+ employees worldwide. www.blaize.com. Follow Blaize on X (@blaizeinc) and LinkedIn (Blaize).

The post Mark AB Capital Partners with Blaize In an Exclusive Relationship to Set Up a State-of-the-art Facility in Abu Dhabi to Provide Sustainable Edge AI Solutions for UAE appeared first on Edge AI and Vision Alliance.

LiDAR Systems for the Automotive Industry: TRIOPTICS’ Measurement Technology Enables Large-scale Production https://www.edge-ai-vision.com/2023/09/lidar-systems-for-the-automotive-industry-trioptics-measurement-technology-enables-large-scale-production/ Mon, 04 Sep 2023 14:04:52 +0000 https://www.edge-ai-vision.com/?p=43491 This market research report was originally published at the Yole Group’s website. It is reprinted here with the permission of the Yole Group. Alongside camera and radar, LiDAR sensors are among the key technologies for highly automated, fully automated, and autonomous driving. Together with camera and radar sensors, the LiDAR sensors perceive the surroundings, detect …

This market research report was originally published at the Yole Group’s website. It is reprinted here with the permission of the Yole Group.

Alongside camera and radar, LiDAR sensors are among the key technologies for highly automated, fully automated, and autonomous driving. Working together, these sensors perceive the surroundings, detect obstacles, measure distances, and thus help ensure safety in road traffic.

One key advantage of LiDAR sensor technology is that it provides both real-time image understanding and environmental sensing capabilities. This technology offers a much more comprehensive and detailed image of the entire vehicle environment – during both day and nighttime – and provides essential information for object detection and collision avoidance, even in adverse weather conditions.

TRIOPTICS and Yole Intelligence share today their expertise on the LiDAR market and technology to give a better understanding of LiDAR manufacturing issues. Frederike Norda Dehn, Product Marketing Manager at TRIOPTICS, and Pierrick Boulay, Senior Technology and Market Analyst in the Photonics and Sensing Division at Yole Intelligence, part of Yole Group, offer you a snapshot of this industry and the key manufacturing steps.

Market figures & trends are extracted from the LiDAR for Automotive report, 2023 edition, from Yole Intelligence.

“The LiDAR market is still small compared to the booming camera market, but forecasts show that the production volume for LiDAR systems is starting to grow. By 2023, 600,000 vehicles are expected to be equipped with LiDAR. These LiDAR products come mainly from manufacturers such as Valeo, Innovusion, Hesai, RoboSense, or Huawei, which are already producing LiDAR systems in initial series.”
Pierrick Boulay
Senior Technology and Market Analyst, Yole Intelligence (part of the Yole Group)

Other LiDAR systems, such as those from Luminar or Innoviz, are still in the development phase, but the market is gaining momentum. In 2024, the one-million mark is likely to be exceeded by a wide margin due to the rapid development in China: as of Q3 2023, 36 Chinese OEMs have launched or will soon launch vehicles with one or more LiDAR systems. The ten-million-unit mark could be reached by the end of the decade. As a result of this rapid development and increase in production volumes, manufacturers are faced with the challenge of developing and implementing new automated production and testing solutions for LiDAR modules. This is where the expertise of TRIOPTICS, the specialist in optical metrology, comes in.
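To put those volume milestones in perspective, the short calculation below estimates the growth rate they imply; the end-of-decade year (2030) is an assumption for illustration, not a Yole Group figure.

    # Implied annual growth if LiDAR-equipped vehicle volumes rise from roughly
    # 600,000 units in 2023 to the ten-million mark by 2030 (assumed end year).
    start_units, end_units = 600_000, 10_000_000
    years = 2030 - 2023
    implied_cagr = (end_units / start_units) ** (1 / years) - 1
    print(f"Implied CAGR 2023-2030: {implied_cagr:.0%}")   # roughly 49% per year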

TRIOPTICS’ equipment achieves maximum output and production speed, enabling the production of over one million sensors per year. This means a fully automated production line with pre-testing of the components, a sensor-specific alignment process of the optical elements, and end-of-line (EOL) testing of the final product. The entire system meets the requirements of the automotive industry for the highest OEE (Overall Equipment Effectiveness) in terms of 24/7 production with maximum availability, as well as traceability of production data and connectivity to the MES (Manufacturing Execution System).
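As a rough illustration of what that throughput means in practice, the sketch below estimates the cycle (takt) time a single line running 24/7 would need to reach one million sensors per year; the OEE value used is a placeholder assumption, not a TRIOPTICS specification.

    # Takt time implied by one million sensors per year on a 24/7 line.
    annual_output = 1_000_000
    seconds_per_year = 365 * 24 * 3600
    assumed_oee = 0.85                      # hypothetical overall equipment effectiveness
    takt_seconds = seconds_per_year * assumed_oee / annual_output
    print(f"Required takt time: ~{takt_seconds:.0f} s per sensor")   # roughly 27 s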

“These advantages recently convinced a leading global automotive supplier of driver assistance systems to commission a lighthouse project at TRIOPTICS: The construction of an automated line for the production of solid-state LiDAR modules that work according to the time-of-flight principle.”
Frederike Norda Dehn
Product Marketing Manager, TRIOPTICS

At the project’s beginning, the customer brought a developed prototype and a manual laboratory set-up for producing the LiDAR system. With TRIOPTICS’ help, that prototyping process then needed to be transferred to automated series production with the highest precision and production speed.


LiDAR module: optical alignment of optic and sensor – Courtesy of TRIOPTICS, 2023

The production line – which was then developed based on the specifications – includes numerous innovative processes and stations (a simplified sketch of the sequence follows the list):

  • Loading with trays: emitter & receiver PCB (Printed Circuit Board) with housing, as well as objective lenses,
  • Barcode scanning for traceability,
  • Plasma cleaning,
  • Glue dispensing,
  • Optical alignment,
  • UV curing of the glue,
  • Oven curing.
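Purely as an illustration of how such a sequence might be encoded for line control, here is a minimal sketch; the station names are taken from the list above, while the data structure itself is an assumption rather than the actual TRIOPTICS or MES interface.

    from dataclasses import dataclass

    @dataclass
    class Station:
        name: str
        logs_traceability: bool = True    # each station records against the scanned barcode

    ASSEMBLY_LINE = [
        Station("Tray loading (emitter/receiver PCB with housing, objective lenses)"),
        Station("Barcode scanning"),
        Station("Plasma cleaning"),
        Station("Glue dispensing"),
        Station("Optical alignment"),
        Station("UV curing"),
        Station("Oven curing"),
    ]

    for position, station in enumerate(ASSEMBLY_LINE, start=1):
        print(f"{position}. {station.name}")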


Alignment station for automotive LiDAR sensors – Courtesy of TRIOPTICS, 2023

In addition to the assembly processes, the production system also covers necessary end-of-line tests that check the performance and quality of the manufactured LiDAR components:

  • Testing of the lens position,
  • Simulation of an object distance at 100 meters to check the accuracy of the beam path from emitter to receiver (see the worked time-of-flight example after this list),
  • Lens positioning test to the emitter and receiver chip respectively,
  • Walk error check by means of a test chart showing three different levels of reflectivity.
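As a quick illustration of the time-of-flight principle behind the 100-meter test above, the following sketch computes the expected round-trip time of the emitted pulse; it is a generic worked example, not part of the test station's software.

    # Time-of-flight relation: distance = c * t / 2, so t = 2 * distance / c.
    SPEED_OF_LIGHT = 299_792_458.0     # m/s
    distance_m = 100.0                 # simulated object distance from the EOL test
    round_trip_s = 2 * distance_m / SPEED_OF_LIGHT
    print(f"Expected round-trip time for a 100 m target: {round_trip_s * 1e9:.0f} ns")   # ~667 ns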

The result is a perfectly aligned sensor that is manufactured and fully tested within a small footprint, ready for use in state-of-the-art vehicles. Conclusion: whether in R&D, prototyping, production, or the ramp-up phase, an intelligent alignment process with uniform, defined, and traceable parameters enables TRIOPTICS to accompany its customers from the laboratory to mass production. TRIOPTICS offers perfectly coordinated equipment for every phase, and the alignment process established at the beginning can continue to be used from phase to phase through to production, saving both time and money.


Yole Group invites you to follow the evolution of the LiDAR industry throughout the year with dedicated analyses, articles, and interviews with industrials, as well as events and product launches.

Yole Group’s analysts are very glad to meet industrials, establish new contacts, and help drive their LiDAR business forward. Come and meet the analysts and look through the latest market, technology, reverse engineering and reverse costing analyses. Send us your request for a meeting during key tradeshows and conferences at events@yolegroup.com.

Stay tuned on yolegroup.com.

The post LiDAR Systems for the Automotive Industry: TRIOPTICS’ Measurement Technology Enables Large-scale Production appeared first on Edge AI and Vision Alliance.

Edge AI: The Wait is (Almost) Over https://www.edge-ai-vision.com/2023/08/edge-ai-the-wait-is-almost-over/ Wed, 30 Aug 2023 12:16:31 +0000 https://www.edge-ai-vision.com/?p=43459 Since the introduction of Artificial Intelligence to the data center, AI has been loath to leave it. With large tracts of floorspace dedicated to servers comprising leading-edge chips that can handle the computational demands for training the latest in AI models, as well as inference via end-user connections to the cloud, data centers are the …

Since the introduction of Artificial Intelligence to the data center, AI has been loath to leave it. With large tracts of floorspace dedicated to servers comprising leading-edge chips that can handle the computational demands for training the latest in AI models, as well as inference via end-user connections to the cloud, data centers are the ideal environment for facilitating much of what AI has to offer. And yet, over the past decade, AI has pushed steadily at the boundaries of the cloud computing environment in a bid to infiltrate the realm beyond.

The edge of the network, where users interact directly with devices that do not necessarily rely on the cloud for computation, has been touted as something of a promised land for AI: embedding accurate, somewhat autonomous AI in devices linked via Wi-Fi connectivity would enable a true Internet of Things. This has been the expectation for the best part of a decade, yet the great Edge AI takeover is still forthcoming. Instead, AI has slowly trickled into certain household devices and consumer electronics goods, with other applications yet to realize the full impact that AI has promised.

In its recently released report, “AI Chips for Edge Applications 2024-2034: Artificial Intelligence at the Edge”, market research company IDTechEx notes that the production rollout of technology being developed by a number of AI chip start-ups targeting edge applications will see AI at the edge continue to grow substantially over the next ten years, albeit not with the type of exponential growth that a ‘boom’ would suggest.


Revenue growth of AI chips used for edge applications does not progress at a constant rate, due to the competing maturation of certain markets (such as for smartphones) against the adoption of AI in others (such as automotive).

AI in Smartphones Headed Towards Saturation; Automotive Just Getting Started

The reasons behind the unconventional growth are multiple, but they boil down to two categories: the first is the saturation and stop-start nature of certain markets that have already employed AI architectures in their incumbent chipsets; the second covers markets where rigorous testing is necessary prior to high-volume rollout of AI hardware. A key example in the first category is the smartphone market, which has already begun to saturate. Premiumization of smartphones nonetheless continues: the share of total smartphones sold that are premium models increases year on year, and because these devices incorporate AI coprocessing in their chipsets, AI revenue grows with them. IDTechEx expects this effect itself to begin to saturate over the next ten years.

Under the second category, flagship automotive-grade Systems-on-Chip (SoCs) for Advanced Driver-Assistance Systems (ADAS) from the likes of Renesas, Qualcomm, and Mobileye are all planned to hit automotive manufacturers' 2024/25 production lines. These systems allow for a minimum of SAE Level 3 driving automation, i.e., conditional automation where driver input is not necessary in certain situations. Further scaling of the technology, after rigorous testing, will allow further checkpoints in driving automation to be reached, with increasing levels of automation adopted in stages.


The SAE levels of driving automation.
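For readers without the accompanying figure, the commonly cited SAE J3016 levels can be summarised as follows; the descriptions in this sketch are paraphrased, not taken from the standard or the IDTechEx report.

    # Paraphrased summary of the SAE J3016 driving-automation levels.
    SAE_LEVELS = {
        0: "No driving automation - the driver performs the entire driving task",
        1: "Driver assistance - steering or speed support, driver does the rest",
        2: "Partial automation - steering and speed support, driver must supervise",
        3: "Conditional automation - no driver input needed in defined conditions",
        4: "High automation - no driver needed within a limited operational domain",
        5: "Full automation - no driver needed under any conditions",
    }

    for level, description in SAE_LEVELS.items():
        print(f"SAE Level {level}: {description}")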

Only a Matter of Time Now

Though the types of models that are employed at the edge will be, in the main, much simpler than those handled within data centers, due to the power constraints of edge devices, it is only a matter of time before even the simplest of AI functions – such as hands-free activation and actioning – comes as an added feature to a range of devices, particularly within the home. IDTechEx have identified the Smart Home as one of the main beneficiaries of AI technology, with the potential to transform how we live and interact with our immediate surroundings.

IDTechEx Report Coverage

IDTechEx forecasts that the global AI chips market for edge devices will grow to US$22.0 billion by 2034. IDTechEx’s report gives analysis pertaining to the key drivers for revenue growth in edge AI chips over the forecast period, with deployment within the key industry verticals – consumer electronics, industrial automation, and automotive – reviewed. More generally, the report covers the global AI Chips market across eight industry verticals, with 10-year granular forecasts in six different categories (such as by geography, by chip architecture, and by application).

IDTechEx’s brand new report, “AI Chips for Edge Applications 2024-2034: Artificial Intelligence at the Edge”, addresses the major questions, challenges, and opportunities faced by the edge AI chip value chain. The report offers an understanding of the markets, players, technologies, opportunities, and challenges. For more information on the report, including downloadable sample pages, please visit www.IDTechEx.com/EdgeAI, or for the full portfolio of AI research available from IDTechEx, please visit www.IDTechEx.com/Research/AI.

About IDTechEx

IDTechEx guides your strategic business decisions through its Research, Subscription and Consultancy products, helping you profit from emerging technologies. For more information, contact research@IDTechEx.com or visit www.IDTechEx.com.

Leo Charlton
Technology Analyst, IDTechEx

The post Edge AI: The Wait is (Almost) Over appeared first on Edge AI and Vision Alliance.
