Market Analysis - Edge AI and Vision Alliance
https://www.edge-ai-vision.com/category/market-analysis/

Emerging Image Sensor Technologies 2024-2034: Applications and Markets
https://www.edge-ai-vision.com/2023/10/emerging-image-sensor-technologies-2024-2034-applications-and-markets/
Mon, 02 Oct 2023

The post Emerging Image Sensor Technologies 2024-2034: Applications and Markets appeared first on Edge AI and Vision Alliance.

For more information, visit https://www.idtechex.com/en/research-report/emerging-image-sensor-technologies-2024-2034-applications-and-markets/965.

Covering image sensors, SWIR, hybrid sensors, hyperspectral imaging, event-based vision, phase imaging, OPD, CCD, thin film photodetectors, organic and perovskite photodetectors, InGaAs, quantum image sensors, and miniaturized spectrometers

IDTechEx’s “Emerging Image Sensor Technologies 2024-2034: Applications and Markets” report evaluates a diverse range of image sensors with varying resolutions and wavelength sensitivities. This technology is set to impact multiple industries, including healthcare, biometrics, autonomous driving, agriculture, chemical sensing, and food inspection, among several others. The growing importance of these technologies is expected to drive the growth of this market, with projections of it reaching US$739 million by 2034. This figure is, in fact, conservative, because a much larger market value is predicted if these sensors take off in the consumer electronics sector. As an example, the QD-on-CMOS market would see a 25x increase in revenue if the consumer electronics space is considered.

Primary insights from interviews with individual players, ranging from established companies to innovative start-ups, are included alongside 25 detailed company profiles that cover both technology and business models and include SWOT analysis. Additionally, the report includes technological and commercial readiness assessments, split by technology and application. It also discusses the commercial motivation for developing and adopting each of the emerging image sensing technologies and evaluates the barriers to entry.


Overview of emerging sensing technologies covered in this report.

Fundamental topics are covered throughout this report, including the evaluation of individual technology readiness levels as well as detailed SWOT analyses for each technology.

From these insights it is possible to predict which technologies are most likely to succeed and which companies are best positioned to thrive in the market.

This report also covers the different applications that will benefit from these technologies, as well as key challenges companies may face in commercializing their products. The rate at which autonomy develops, for instance, will depend partly on the maturity of these sensors over the medium to long term. Increased sensor maturity translates to more cost-effective and more capable technology, i.e., more sensitive sensors.

Emerging Image Sensors Go Beyond Visible/IR

While conventional CMOS detectors for visible light are well established and somewhat commoditized, at least for low value applications, there is an extensive opportunity for more complex image sensors that offer capabilities beyond that of simply acquiring red, green, and blue (RGB) intensity values. As such, extensive effort is currently being devoted to developing emerging image sensor technologies that can detect aspects of light beyond human vision. This includes imaging over a broader spectral range, over a larger area, acquiring spectral data at each pixel, and simultaneously increasing temporal resolution and dynamic range.

Much of this opportunity stems from the ever-increasing adoption of machine vision, in which image analysis is performed by computational algorithms. Machine learning requires as much input data as possible to establish correlations that can facilitate object identification and classification, so acquiring optical information over a different wavelength range, or with spectral resolution for example, is highly advantageous.

Emerging image sensor technologies offer many other benefits. Depending on the technology, these can include similar capabilities at a lower cost, increased dynamic range, improved temporal resolution, spatially variable sensitivity, global shutters at high resolution, reduced influence of unwanted scattering, flexibility/conformality, and more. A particularly important trend is the development of much cheaper alternatives to very expensive InGaAs sensors for imaging in the short-wave infrared (SWIR, 1000-2000 nm) spectral region, which will open this capability to a much wider range of applications. These include autonomous vehicles, in which SWIR imaging assists with distinguishing objects/materials that appear similar in the visible spectrum, while also reducing scattering from dust and fog.

There are several competitive emerging SWIR technologies. These include hybrid image sensors, in which an additional light-absorbing thin film layer made of organic semiconductors or quantum dots is placed on top of a CMOS read-out circuit to extend the wavelength detection range into the SWIR region. Another technology is extended-range silicon, in which the properties of silicon are modified to extend the absorption range beyond its bandgap limitations. The SWIR market is currently dominated by expensive InGaAs sensors; these new approaches promise a substantial price reduction, which is expected to encourage the adoption of SWIR imaging for new applications such as autonomous vehicles.

Obtaining as much information as possible from incident light is highly advantageous for applications that require object identification, since classification algorithms have more data to work with. Hyperspectral imaging, in which a complete spectrum is acquired at each pixel to produce an (x, y, λ) data cube using a dispersive optical element and an image sensor, is a relatively established technology that has gained traction for precision agriculture and industrial process inspection. However, at present most hyperspectral cameras work on a line-scan principle, while SWIR hyperspectral imaging is restricted to relatively niche applications due to the high cost of InGaAs sensors, which can exceed US$50,000. Emerging technologies using silicon or thin film materials look set to disrupt both these aspects, with snapshot imaging offering an alternative to line-scan cameras and with the new SWIR sensing technologies facilitating cost reduction and adoption for a wider range of applications.
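To make the (x, y, λ) data cube concrete, it can be modeled as a three-dimensional NumPy array. The dimensions, wavelengths, and band math below are purely illustrative (synthetic data, not from any real sensor):

```python
import numpy as np

# A hyperspectral frame is an (x, y, lambda) data cube: a full spectrum at
# each spatial pixel. Illustrative dimensions only.
H, W, BANDS = 64, 64, 100                    # 64x64 pixels, 100 spectral bands
wavelengths = np.linspace(400, 1000, BANDS)  # nm, visible through NIR
cube = np.random.rand(H, W, BANDS)           # stand-in for acquired radiance

# Per-pixel spectrum: index the spatial coordinates.
spectrum = cube[10, 20, :]                   # shape (100,)

# A line-scan camera acquires one (y, lambda) slice per exposure and sweeps x;
# a snapshot imager captures the whole cube at once.
line_scan_slice = cube[:, 0, :]              # shape (64, 100)

# Simple band math, e.g. an NDVI-style index (used in precision agriculture)
# from the bands nearest two chosen wavelengths.
red = cube[:, :, np.argmin(np.abs(wavelengths - 670))]
nir = cube[:, :, np.argmin(np.abs(wavelengths - 800))]
ndvi = (nir - red) / (nir + red + 1e-9)
print(spectrum.shape, line_scan_slice.shape, ndvi.shape)
```

The snapshot-versus-line-scan distinction the text draws is visible in the indexing: a line-scan system fills the cube one `(y, λ)` slice at a time, while snapshot hardware delivers the full array per exposure.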

Another emerging image sensing technology is event-based vision, also known as dynamic vision sensing (DVS). Autonomous vehicles, drones and high-speed industrial applications require image sensing with a high temporal resolution. However, with conventional frame-based imaging a high temporal resolution produces vast amounts of data that requires computationally intensive processing. Event-based vision is an emerging technology that resolves this challenge. It is a completely new way of thinking about obtaining optical information, in which each sensor pixel reports timestamps that correspond to intensity changes. As such, event-based vision can combine greater temporal resolution of rapidly changing image regions, with much reduced data transfer and subsequent processing requirements.
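The per-pixel behavior described above can be sketched with a toy model: each pixel fires an event only when its log-intensity changes by more than a threshold, and a static scene produces no data at all. This is an illustrative simulation, not any vendor's pipeline:

```python
import numpy as np

# Toy event-camera model: a pixel emits (timestamp, x, y, polarity) when its
# log-intensity moves past a threshold relative to its last reference level.
def frames_to_events(frames, timestamps, threshold=0.2):
    events = []
    ref = np.log1p(frames[0].astype(float))    # per-pixel reference level
    for t, frame in zip(timestamps[1:], frames[1:]):
        cur = np.log1p(frame.astype(float))
        diff = cur - ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            events.append((t, int(x), int(y), 1 if diff[y, x] > 0 else -1))
            ref[y, x] = cur[y, x]              # reset reference where it fired
    return events

# A static scene generates no events; only the pixel that changed reports.
frames = np.zeros((3, 4, 4), dtype=np.uint8)
frames[2, 1, 1] = 200                          # one pixel brightens in frame 2
events = frames_to_events(frames, timestamps=[0.0, 0.01, 0.02])
print(events)  # -> [(0.02, 1, 1, 1)]
```

Three frames of a nearly static scene collapse to a single event, which is the data-reduction property that makes the approach attractive for high-speed applications.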

The report also looks at the burgeoning market of miniaturized spectrometers. Driven by the growth in smart electronics and Internet of Things devices, low-cost miniaturized spectrometers are becoming increasingly relevant across different sectors. The complexity and functionality of standard visible light sensors can be significantly improved through the integration of miniaturized spectrometers that can detect from the visible to the SWIR region of the spectrum. The future imagined by researchers at Fraunhofer is a spectrometer weighing just 1 gram and costing a single dollar. Miniaturized spectrometers are expected to deliver inexpensive solutions that improve automation efficiency, particularly within industrial imaging and inspection as well as consumer electronics.

IDTechEx has 20 years of expertise covering emerging technologies, including image sensors, thin film materials, and semiconductors. Our analysts have closely followed the latest developments in relevant markets, interviewed key players within the industry, attended conferences, and delivered consulting projects in the field. This report examines the current status and latest trends in technology performance, supply chain, manufacturing know-how, and application development progress. It also identifies the key challenges, competition, and innovation opportunities within the image sensor market.

Dr. Miguel El Guendy
Technology Analyst, IDTechEx

Dr. Xiaoxi He
Research Director, Topic Lead, IDTechEx

From ‘Smart’ to ‘Useful’ Sensors
https://www.edge-ai-vision.com/2023/10/from-smart-to-useful-sensors/
Mon, 02 Oct 2023


The post From ‘Smart’ to ‘Useful’ Sensors appeared first on Edge AI and Vision Alliance.

  • Can anyone make ‘smart home’ devices useful to consumers? A Calif. startup, Useful Sensors, believes it has the magic potion.

  • What’s at stake? Talk of edge AI, particularly machine learning, has captivated the IoT market. Yet actual consumer products with local machine learning capabilities are rare. Who’s ready to pull that off? Will it be a traditional MCU supplier or an upstart — like Useful Sensors?

  • Tech jargon like “smart home” and “smart sensor” has been overused to the point where the real value that might be delivered by the related technologies reaches most non-techie consumers largely as fog.

    Why, for instance, would any sensible person fiddle with apps, options and swipes on a smartphone to turn off the light when there’s a simple switch within reach?

    Here’s the good news. We found an executive who shares our skepticism about smart homes.


    Pete Warden, Useful Sensors CEO

    Pete Warden is CEO and co-founder of Useful Sensors (Mountain View, Calif.).

    Warden pointedly named his 2022 startup Useful — rather than “Smart” — Sensors. When he spins his dream of a home equipped with Useful Sensors’ hardware and software solutions, Warden calls it “a magical house,” not a smart home.

    In Warden’s magical house, one simply enters a room, looks at a light and says “On.” And then there is light. “Or your TV will pause when you get up to have a cup of tea,” said Warden.

    Useful Sensors’ brainstorm is “low-cost, easy-to-integrate hardware modules that bring machine learning capabilities like gesture recognition, presence detection, and voice interfaces to TVs, laptops, and appliances while preserving users’ privacy,” he explained.

    Warden said, “Consumers might not even know that there is AI in everyday consumer devices. But those devices just do the right thing. That’s why I like the idea of ‘useful,’ instead of ‘smart.’ Smart is kind of like showing off.”

    Emphasizing the importance of usefulness in technologies, he added, “I hope it to be in the background and doing what you want.”

    Deep learning inference on embedded devices is a rapidly growing field with many potential applications. Edge AI is the buzz at every technology conference. Virtually every MCU supplier from STMicroelectronics and Renesas to Silicon Labs is promoting a new MCU with AI capabilities.

    Ex-Google

    Many MCU suppliers try to tackle from a hardware angle the broad challenge of implementing machine learning on resource-constrained processors. They see the name of the game as running the AI stack as efficiently as possible on their tailored hardware, while substantially reducing power consumption.

    Warden, in contrast, sees these challenges and thinks software. After all, he takes credit for coining the term “TinyML,” the application of machine learning in a tiny footprint within embedded platforms.

    Prior to Useful Sensors, Warden was at Google as one of the members of the original TensorFlow team. After his team got TensorFlow running well on phones, Warden got busy with TensorFlow Lite Micro, a machine learning framework for embedded systems.

    His obsession with running machine learning on embedded devices only intensified. With Qualcomm senior director Evgeni Gousev, Warden launched the TinyML conference four years ago.

    Eager to evangelize TinyML, Warden also wrote the standard embedded ML textbook, TinyML, published by O’Reilly.

    He also teaches embedded machine learning at Stanford University.

    ‘Don’t make us think’

    Naturally, Warden and his team were very excited about the potential of ML applications in various consumer devices. They figured they were doing everything necessary for system OEMs to embrace TensorFlow Lite Micro.

    But the real-world reaction to Warden’s pitch was tepid.

    “When I went to talk to light switch manufacturers or TV manufacturers to try and get them to adopt this open-source framework made available in TensorFlow Lite Micro, they listened to me” politely, said Warden. He offered “free code, examples, courses, books, everything else available to system companies.”

    But at the end, Warden’s audience typically said, “Look, we barely have a software engineering team. We do not have a Machine Learning Team. Can you just give us the thing that does a voice interface?” Or, “something that’s ready-made?”

    Warden: “They literally told us, ‘Don’t make us think. Don’t make us code.’”

    After a series of rebuffs, Warden said he realized, “What we needed to do was to put together something that was as easy to integrate as a temperature or pressure or an accelerometer sensor. Those sensors should give people machine learning capabilities.”

    This was a scheme that would require Google to release hardware, an unlikely prospect. “Google wouldn’t want to risk the company’s whole reputation behind hardware products.”

    So, bye bye, Google, and on to Useful Sensors.

    Building blocks of Useful Sensors’ offerings

    Typically, Useful Sensors offers a small module with a sensor on one side and a little microcontroller opposite.

    A “Person Sensor” comes with a small camera and a small MCU. The module is preloaded with software to detect people. It includes an I²C interface. The processing cores used in the module are all off-the-shelf, said Warden, such as Synopsys’ ARC processing core or Arm Cortex-M, “to drive down the cost.”

    Useful Sensors is also mindful of privacy issues. “All one can get on our module is the metadata,” said Warden. “So, it can tell, for example, there are three people in the room.”
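From the host's side, a module like this behaves like any other I²C sensor: the application reads a small metadata packet rather than image pixels. The 13-byte layout sketched here (a person count followed by per-face confidence/position/size bytes) is hypothetical, for illustration only, and is not Useful Sensors' actual register map:

```python
import struct

# Hypothetical metadata packet from a person-detection module:
#   byte 0        -> number of people detected
#   bytes 1..4*N  -> per-face (confidence, x, y, size), one byte each
def parse_person_metadata(packet: bytes):
    count = packet[0]
    faces = []
    for i in range(count):
        conf, x, y, size = struct.unpack_from("BBBB", packet, 1 + 4 * i)
        faces.append({"confidence": conf, "x": x, "y": y, "size": size})
    return {"num_people": count, "faces": faces}

# The host sees only metadata ("three people in the room"), never pixels.
sample = bytes([3, 250, 10, 20, 40, 200, 80, 90, 35, 180, 160, 60, 30])
meta = parse_person_metadata(sample)
print(meta["num_people"])  # -> 3
```

The privacy property follows from the interface itself: if the module only ever exposes a packet like this, host software cannot reconstruct the camera image.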

    As with any consumer electronics device, the tricky part is figuring out the user experience, explained Warden. It’s a challenge to make Useful Sensors’ features “both discoverable and seamless” in consumer appliances. Warden’s theoretical example of subtle consumer symbiosis: “Imagine your TV … when you sat down, it just booted what you were watching last.”

    ‘We do Transformer’

    Useful Sensors’ software expertise shines especially when squeezing software into off-the-shelf hardware.

    Warden boasts that his team knows how to do stuff on $2 hardware that would require someone else a $20 chip.

    Another advantage of Useful Sensors’ software expertise is its nimbleness in keeping up with rapid advancements in machine learning. “For example, we use Transformers,” said Warden. The transformer is a newer building block for machine learning models, different from the convolutional models familiar to many people already using AI.

    The “T” in ChatGPT stands for Transformer, as it deploys the Generative Pre-trained Transformer 3.5 (GPT-3.5).

    Because Transformer employs a different kind of compute, “a lot of the fixed function neural network accelerators that people have brought out in the last year or two, do not support Transformers,” explained Warden. “So it’s an example of why having some degree of programmability is really important for these kinds of hardware, because what people want to run is changing so fast.”
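The "different kind of compute" Warden describes can be seen in a minimal single-head self-attention sketch (NumPy, illustrative dimensions): the core is large, input-dependent matrix multiplies plus a softmax, rather than the fixed sliding-window convolutions that many dedicated CNN accelerators are hardwired for.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Project the input into queries, keys, and values.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Token-to-token similarity: a dynamic matmul whose operands both depend
    # on the input, unlike a convolution's fixed weight kernel.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Softmax-weighted mix of values.
    return softmax(scores) @ V

rng = np.random.default_rng(0)
seq_len, d = 4, 8
X = rng.standard_normal((seq_len, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # -> (4, 8)
```

A fixed-function convolution engine has no natural place to run the `Q @ K.T` product or the softmax, which is why programmability matters when workloads shift this quickly.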

    Who has signed up so far?

    Useful Sensors has signed Non-Disclosure Agreements and Evaluation agreements with several companies whose names Warden declined to reveal. But the company hopes to announce partnerships “over the next few months,” said Warden.

    For AI hardware startups to sign agreements with automotive companies is unimaginably tough – largely because so many startups don’t last. Working with consumer electronics companies is equally tough. Warden agreed. “CE companies have very high standards,” he said, adding: “I am really passionate about trying to do something that actually helps people’s everyday lives, instead of gimmicks. I believe consumer electronics is such a good way to reach people.”

    Warden noted that among the different Useful Sensors’ applications, the one gaining momentum most is “having local voice interfaces” that don’t require sending the voice request to the cloud. Functions from lighting to audio equipment and TVs work better with local voice control, but don’t have the capability today. “That’s why so few people actually use Alexa or Siri,” Warden noted.

    Undoubtedly, local voice will represent a big user experience challenge. Warden said, optimistically, “There’s something magical about just being able to talk to all of the objects around your home … like a Disney house where you walk in and say, ‘Hello, coffee pot!’”

    Bottom line

    Enabling edge AI is easier said than done. The crux of the issue is the lack of talent. The hardest thing, whether for an MCU supplier or a startup like Useful Sensors, is to find skilled people good at machine learning who also know how to build hardware in the embedded space.

    Junko Yoshida
    Editor in Chief, The Ojo-Yoshida Report


    This article was published by The Ojo-Yoshida Report. For more in-depth analysis, register today and get a free two-month all-access subscription.

    Unleashing LiDAR’s Potential: A Conversation with Innovusion
    https://www.edge-ai-vision.com/2023/09/unleashing-lidars-potential-a-conversation-with-innovusion/
    Sat, 30 Sep 2023


    The post Unleashing LiDAR’s Potential: A Conversation with Innovusion appeared first on Edge AI and Vision Alliance.

    This market research report was originally published at the Yole Group’s website. It is reprinted here with the permission of the Yole Group.

    The market for LiDAR in automotive applications is expected to reach US$3.9 billion in 2028 from US$169 million in 2022, representing a 69% Compound Annual Growth Rate (CAGR).
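The quoted growth rate can be sanity-checked from the two market figures, since CAGR over n years is just the n-th root of the end/start ratio:

```python
# US$169M (2022) growing to US$3.9B (2028), a span of 6 years.
start, end, years = 169e6, 3.9e9, 6
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")  # -> CAGR: 68.7%, matching the quoted ~69%
```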

    According to Yole Intelligence’s LiDAR for Automotive 2023 report, Innovusion was the leading player in 2022, with a 28% market share. The LiDAR market in automotive is just starting, and actual LiDAR volume could triple in 2023, showing the strong dynamics in this market led by Chinese players, whether OEMs or LiDAR manufacturers.

    Innovusion is a global LiDAR manufacturer developing hybrid solid-state LiDAR based on 1,550nm components and using a fiber laser as a light source. The company has a successful partnership with NIO, a Chinese automotive OEM, and its LiDAR is installed on the roof of NIO’s cars.

    In this context, Pierrick Boulay, Senior Market and Technology Analyst at Yole Intelligence, part of Yole Group, had the opportunity to share his expertise and vision with Yimin Li, CTO and co-founder of Innovusion.

    Discover the details of the conversation below.

    Pierrick Boulay (PB): Please introduce your company and your position.

    I am Yimin Li, the CTO and co-founder of Innovusion. I have a background in Quantum Electronics, and prior to founding Innovusion, I worked at companies such as Velodyne, Baidu, Agilent, AOSense, and GE Healthcare, accumulating over 20 years of experience and expertise in electronic, optical, and laser technologies, including LiDAR.

    I founded Innovusion with Junwei Bao in Silicon Valley in 2016. It’s hard to believe that just seven years later, we are now a global leader in the LiDAR space and that our ability to design and manufacture 1550nm LiDAR at scale is unparalleled. I think that speaks volumes about not just the quality of our underlying technology but, even more so, the quality of our global teams. We have R&D teams in the US, in Silicon Valley, as well as in Suzhou and Shanghai, in China. We also operate state-of-the-art, highly automated, auto-grade manufacturing bases in Suzhou and Ningbo that are fully operational. These were all developed and built from scratch and are producing and delivering at scale for our global automotive partners, like NIO and Faraday Future. For example, our flagship, automotive-grade LiDAR sensor, Falcon, is included as standard equipment on almost all NIO models (including their ET7, ES7, ET5, EC7, ES6, ET5T, and ES8), as part of their highly acclaimed Aquila autonomous driving system.

    In 2022, our LiDAR achieved the No.1 volume and revenue in passenger vehicles globally (even surpassing expectations from Yole!). This year is going even stronger, and we’re on target to surpass last year’s passenger vehicle volume by the end of July 2023, with a forecasted trajectory of continued high-speed growth thereafter.

    At the same time, we have signed strategic contracts with many commercial vehicle partners, such as TuSimple, Encon, Zhito, Plus, and DeepWay, to jointly promote the large-scale application of high-performance LiDAR in the commercial logistics field.

    In addition, our sensors are also vital to smart transportation initiatives and programs, and we’re also partnering with numerous giants of smart transportation, highway, rail, and industrial automation globally in order to help bring the power of LiDAR to improve the efficiency, safety, and flow of traffic in cities and ports around the world.

    PB: Could you briefly introduce Innovusion’s LiDAR technology?

    Our company focuses on the development of hardware and software solutions for LiDAR. Currently, we have the Falcon series and Robin series LiDAR hardware products, as well as OmniVidi on the software side.

    Our flagship LiDAR sensors are the Falcon series, which uses 1550nm laser technology. We’ve produced and shipped over 150,000 units of this sensor and are proud to be a critical part of NIO’s Aquila sensor suite and standard equipment on almost all of their new vehicles. This is the only mass-produced 1550nm LiDAR today, and that is something that we are very, very proud of. It is hard enough to get your LiDAR to work in a lab, but to then mass-produce, deliver to your customers like clockwork, and ultimately help power L2+ ADAS systems on cars all around the world? Those are the kinds of things that Junwei and I dreamt of when we started this company, and sometimes it is hard to believe how far we’ve come. From a technical standpoint, Falcon has a maximum detection range of 500 meters (250 meters @ 10% reflectivity), making it ideal for long-range applications. It also features a hybrid solid-state scanning mechanism that allows for high-precision scanning and dynamic focusing. When accounting for the precision, the production at scale, and our ability to work closely with partners and customize the details as needed, Falcon really stands in a class of its own at the moment. And it keeps getting better.


    Falcon – Automotive-grade Ultra-long Range Front-view LiDAR

    Next is our Robin platform, which has a 905nm laser light source. It also leverages hybrid solid-state technology and incorporates cutting-edge electronics and optical technology and a highly modular architecture to achieve excellent product performance and adaptability in various laser detection scenarios.

    Robin comes in two models – Robin-E & Robin-W:

    • Robin-E is an advanced long-range forward-looking LiDAR that currently achieves the leading detection level amongst forward-looking lasers in the industry. It has a maximum detection range of 250 meters and a standard detection range of 180 meters @ 10% reflectivity. It features a resolution of 0.1° × 0.2° and a field of view of 120° × 24°. The overall design is exquisite and compact, with the ability to achieve a curved surface design on the optical window, seamlessly fitting the vehicle’s structure and appearance. It can easily be integrated into different positions, such as headlights and bumpers. In fact, we’re currently collaborating with a leading automotive glass manufacturer to explore new aesthetic and practical installation ideas for our LiDAR, including the co-development of a rear windshield installation solution featuring Robin-E.
    • Robin-W is the sibling of Robin-E. It’s a high-performance medium- to short-range wide-angle LiDAR and was designed with side and rear installations in mind. The stats are still fantastic. The standard detection range, for example, is 70 meters at 10%. That is twice the ranging capability of similar products in the market right now. The same thing with the resolution: 0.1° × 0.4°, which is significantly higher than the detection accuracy of similar products. This is important because the high resolution provides more accurate target recognition, ensuring clear visibility and enabling early identification and appropriate reactions. According to our calculations, in typical scenarios such as high-speed lane change, unprotected turns in urban areas, and vehicle parking, the detection distance in the side and rear directions needs to be at least 70 meters to meet safety requirements. Those considerations drive our design and inspire us to keep pushing further.
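A back-of-envelope check makes the 70 m side/rear figure plausible. The closing speed and maneuver time below are my own illustrative assumptions, not Innovusion's calculation:

```python
# An overtaking vehicle closing at an assumed 36 km/h relative speed must be
# detected early enough to cover the decision plus lane-change time.
closing_speed_kmh = 36            # assumed relative speed (~10 m/s)
maneuver_time_s = 6.0             # assumed decision + lane-change duration
closing_speed_ms = closing_speed_kmh / 3.6
required_range_m = closing_speed_ms * maneuver_time_s
print(f"required detection range: {required_range_m:.0f} m")  # -> 60 m
```

With these rough numbers the requirement lands at 60 m, the same order as the 70 m the interview cites once a safety margin is added.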


    Robin-E – Image-grade Long Range Front-view LiDAR

    Another thing that really sets us apart is that we don’t just design our sensors to have great specs and look great on paper – we design them to be easy to manufacture and, more importantly, easy for our partners to work with and design around. The power consumption, noise levels, and heat dissipation of the Robin line speak to this perfectly. Robin sensors, both Robin-E and Robin-W, have ultra-low power consumption of less than 10 W. To put that in context, this is the first time the power consumption of automotive-grade LiDAR has been reduced to single digits. This is significantly lower than the mainstream products in the current forward-looking LiDAR market, and when paired with the low noise level (below 20 dBA) and integrated heat dissipation, Robin-E is a very friendly sensor to work with from an engineering and design integration perspective.


    Robin-W – High-performance Mid-to-short Range Wide-FOV LiDAR

    But then it gets better. The cherry on top of all this is our OmniVidi software platform. This is perception middleware that serves as a complete perception solution. It incorporates cutting-edge deep learning frameworks and provides a comprehensive toolchain – including algorithm model components, perception function suites, intelligent data, and quantitative evaluation. By combining various advanced technologies and traditional methods, such as the SightNet model for single LiDAR forward perception, lightweight fusion perception model for multiple LiDARs, spatiotemporal fusion 3D object detection and tracking algorithms, clustering, and Kalman filtering, it effectively reduces computational load. It also integrates both BEV and RV dual-mode detectors to strike a balance between real-time performance and accuracy, enabling efficient detection of surrounding objects and delivering outstanding perception capabilities.
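The Kalman-filter tracking step mentioned above can be sketched with a constant-velocity model for one tracked object's (x, y) position. The matrices and noise levels here are illustrative textbook choices, not OmniVidi's actual parameters:

```python
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],     # state transition: pos += vel * dt
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],      # the detector measures position only
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-3             # process noise
R = np.eye(2) * 1e-2             # measurement (detector) noise

x = np.zeros(4)                  # state: [px, py, vx, vy]
P = np.eye(4)

def kalman_step(x, P, z):
    # Predict forward one frame under the constant-velocity model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the detector's position measurement z.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Feed detections of an object moving at 1 m/s along x; the filter
# recovers both position and the unmeasured velocity.
for t in range(50):
    z = np.array([t * dt * 1.0, 0.0])
    x, P = kalman_step(x, P, z)
print(np.round(x, 2))  # position ~ (4.9, 0), velocity ~ (1, 0)
```

In a full pipeline like the one described, per-frame detections from the LiDAR perception model would replace the synthetic measurements `z`, with one such filter maintained per tracked object.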

    PB: LiDAR has traditionally been seen as a high-cost component, especially those based on fiber lasers. What strategies or innovations is your company implementing to make LiDAR solutions more affordable and accessible to automotive manufacturers?

    Let me say that I don’t think the true value of LiDAR has even been realized yet. It is an extremely powerful technology with so many potential applications and possible impacts down the road, only a handful of which are known at the moment.

    That said, the price. I’ll start by noting that our first focus is, and always will be, the quality of our products – their precision, their performance, their reliability. That said, we are also always very cost-conscious and are continually looking to optimize our products and reduce costs where it makes sense.

    We are actively trying to bring down the costs of producing our LiDAR and have already taken many active steps towards that goal. Building our production capabilities and supply chains from scratch meant we needed to invest a lot early on. But as volume increases and efficiencies of scale become evident, there will be significant room for cost reductions. This requires continuous expansion of the ecosystem and ongoing investments, but it also means that many components of 1550nm LiDAR that are currently expensive are beginning to come down in price as our technology advances and production scales up. With this, you will also see the price gap between 1550nm and 905nm LiDAR start shrinking, with our expectation being that the cost difference between the two will eventually be minimal, around 5% to 10%.

    In addition to keeping an eye on current costs, we’ve also created and begun to implement a roadmap for meaningful cost reductions and concrete plans for achieving them. In the meantime, Innovusion is outperforming our relevant competition by a wide margin in terms of current performance and cost-effectiveness, and we’ll continue to focus on delivering valuable, full-lifecycle solutions to customers at prices they can afford.

    PB: As LiDAR technology advances, how do you anticipate it will impact the overall design and aesthetics of vehicles? Will LiDAR sensors become more discreet and seamlessly integrated?

    Innovusion is dedicated to helping our OEM partners achieve their design visions and providing them with the most flexible and least intrusive options for their designs. While this can mean offering higher-precision sensors that consume less power and generate less heat, it also means delivering products that seamlessly fit into the design visions and demands of our partners. To this end, the products we collaborate on with customers undergo intense customization to meet their specific design requirements. While the OEMs will typically take the lead in the design process, we then work closely with them to make those visions a reality. This often means there is a deep degree of customization that needs to happen with the physical design of LiDAR itself, like the curvature of the optical window or the size of the device.

    What we really love about this collaborative process, though, is that it can also drive aesthetic design innovations that we are then able to leverage with other products or partners down the road. The power and spirit of this collaboration is perhaps best seen in the NIO ET7 being awarded a Red Dot design award for the smooth integration of LiDAR and other sensors into their autonomous driving capabilities. As the jury noted, ‘The reduced design of the NIO ET7 and the comprehensive use of smart technology merge into a harmonious overall appearance’.

    In addition to working closely with our OEM partners to deliver on their vision, we’re also proactively driving the design possibilities of LiDAR through additional research partnerships with major auto suppliers like Fuyao and Wideye. These allow us to explore different installation positions and methods – like installation behind the windshield, within bumpers and headlights, or within the roofline – to better align with the overall aesthetic design of vehicles.

    PB: What advancements or developments in LiDAR technology do you believe are necessary to overcome the limitations or challenges currently faced in automotive applications?

    There are a number of key areas that we are actively focused on that we believe will help speed up the mass adoption of LiDAR across all manufacturers. First, there is the continuous progress and integration of laser detectors, which will further improve the performance and reduce the cost of LiDAR. In addition, the continuous improvement of LiDAR signal processing algorithms will greatly assist in vehicle control. Finally, as the capabilities of LiDAR continue to develop, there is a constant expansion of the definition of LiDAR usage scenarios and requirements. Currently, front-view LiDAR products can’t fully meet the demand. In the future, there will be a need for targeted products for new requirements and new scenarios, such as side-view LiDAR.

    PB: LiDAR technology has traditionally relied on mechanical scanning systems, but solid-state LiDAR solutions are gaining attention. What advantages and challenges do you see in adopting solid-state LiDAR for automotive applications? How is Innovusion positioned regarding this transition towards solid-state LiDAR?

    There has been a lot of buzz around solid-state LiDAR for years, and with good reason. Solid-state LiDAR technology holds the potential for high reliability, but the current technology is still immature and has challenges to solve regarding detection range and field of view. Once these are solved, manufacturing becomes the next challenge and will need to be proven at scale.

    In contrast, Falcon, a leading hybrid solid-state LiDAR, is being produced at scale, all while meeting the stringent automotive-grade reliability standards of the industry and the demanding technical expectations of our partners. Solid-state LiDAR just can’t deliver in the same way today. Will solid-state get there and replace hybrid? We’ll see, but for the moment, hybrid solid-state is the best LiDAR technology available and the only one deliverable at scale to meet our partners’ demanding needs.

    Moving forward, we will continue to evaluate and research a wide variety of LiDAR technologies and approaches and will always be dedicated to providing our customers with the best LiDAR and sensing technologies available. We’ll continue to select the technology route that we consider the most mature, suitable, and cost-effective based on practical usage scenarios.

    PB: LiDAR technology often works in conjunction with other sensor systems like cameras and radar. How do you envision the synergy between LiDAR and these complementary technologies in enabling safer and more reliable autonomous driving systems?

    We expect these three sensor technologies to coexist and complement each other for a long time. LiDAR technology itself plays an undeniably critical role in autonomous driving systems by providing high-precision 3D information that other sensors are unable to generate. But when paired with other sensor systems, such as cameras and radars, there are synergistic benefits to the overall safety and reliability of the system. For example, cameras provide high-resolution images for object and scene recognition. LiDAR, on the other hand, provides precise distance and 3D spatial information, detecting objects that may be challenging for cameras, such as pedestrians or obstacles in low-light conditions. However, through the collaboration of cameras and LiDAR, more comprehensive and accurate perception information can be obtained, and the surrounding environment can be perceived and understood more accurately and holistically. The result is that autonomous driving systems are able to make better and quicker decisions, leading to safer, more comfortable experiences for us, our friends, our families, and society as a whole.
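    As a toy illustration of this camera–LiDAR pairing (not Innovusion’s actual pipeline – the focal length, image size, and the sample point are all hypothetical), a LiDAR return can be projected into a camera image with an idealized pinhole model, so that detections from the two sensors can be compared pixel by pixel:

```python
# Minimal sketch of camera-LiDAR fusion: project a 3D LiDAR point
# (in the camera frame, metres, z pointing forward) into 2D pixel
# coordinates using an idealized pinhole camera model.
# All intrinsics below are made-up illustrative values.

def project_point(point_xyz, fx=800.0, fy=800.0, cx=640.0, cy=360.0):
    """Project a 3D point to (u, v) pixel coordinates, or None if behind the camera."""
    x, y, z = point_xyz
    if z <= 0:
        return None  # point is behind the camera and cannot be imaged
    u = fx * x / z + cx
    v = fy * y / z + cy
    return (u, v)

# A LiDAR return 10 m ahead and 1 m to the left of the sensor:
pixel = project_point((-1.0, 0.0, 10.0))
print(pixel)  # (560.0, 360.0) - the camera detection can now be checked at this pixel
```

In a real fusion stack this projection step is preceded by an extrinsic calibration between the LiDAR and camera frames; here the two frames are assumed to coincide for simplicity.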

    PB: Looking ahead, what do you foresee as the next major breakthrough or innovation in LiDAR technology that will have a transformative impact on the automotive industry?

    With LiDAR already being recognized as a necessary sensor for intelligent driving and with our Falcon sensors already in tens of thousands of cars on roads all around the world, we’re really excited about the transformative impacts that it is already enabling.

    That said, we’re just getting started, and we think LiDAR is too. We’re actively exploring new technologies and approaches, and we’re really excited about some of the promising new technologies we’re working on. For example, while 1550nm and 905nm have been established as the de facto wavelength bands for emitted lasers, what happens when you explore higher bands? What characteristics of detected targets can be improved, and what limitations of current LiDAR systems can we mitigate? These are the kinds of questions we’re asking and the kind of research we’re conducting, all in the hopes of helping to spark the next big wave of innovations in LiDAR and sensing technology.

    PB: Is there anything else you would like to add?

    Thank you so much. We’re excited to help bring the power of intelligent vision to everyone and everything and can’t wait to see all the possibilities that will be unlocked as a result. With LiDAR, we think the future is bright, and we’re excited to be a part of such a dynamic and growing industry.


    The post Unleashing LiDAR’s Potential: A Conversation with Innovusion appeared first on Edge AI and Vision Alliance.

    SiP Market Soars on the Wings of Chiplets and Heterogeneous Integration https://www.edge-ai-vision.com/2023/09/sip-market-soars-on-the-wings-of-chiplets-and-heterogeneous-integration/ Thu, 28 Sep 2023 20:00:18 +0000 https://www.edge-ai-vision.com/?p=44176 This market research report was originally published at the Yole Group’s website. It is reprinted here with the permission of the Yole Group. The growth of the SiP market is propelled by the trends in 5G, AI, HPC, autonomous driving, and IoT. OUTLINE The SiP market is forecast to reach US$33.8 billion by 2028, showcasing …


    The post SiP Market Soars on the Wings of Chiplets and Heterogeneous Integration appeared first on Edge AI and Vision Alliance.

    This market research report was originally published at the Yole Group’s website. It is reprinted here with the permission of the Yole Group.

    The growth of the SiP market is propelled by the trends in 5G, AI, HPC, autonomous driving, and IoT.

    OUTLINE

    • The SiP market is forecast to reach US$33.8 billion by 2028, showcasing a robust 8.1% CAGR.

    • The growth of the SiP market is fueled by the increasing adoption of various technology trends, including heterogeneous integration, chiplet technology, package footprint reduction, and cost optimization, particularly within market segments such as 5G, AI, HPC, autonomous driving, and IoT.

    • The SiP supply chain is becoming increasingly competitive and emphasizes collaboration for optimal results: Asia dominates the market. OSATs are leading the competitive landscape.

    The SiP market was worth US$21.2 billion in 2022 and is projected to reach US$33.8 billion by 2028, growing at an 8.1% CAGR. This growth is fueled by trends like heterogeneous integration, chiplet technology, package optimization, and cost-efficiency, particularly in 5G, AI, HPC, autonomous driving, and IoT sectors. Yole Group’s analysts forecast that the mobile and consumer segment, which accounted for 89% of the 2022 revenues, will maintain a 6.5% CAGR, driven by 2.5D/3D technologies, HD FO, and FC/WB SiPs.
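    As a quick sanity check of the figures quoted above (a sketch, not Yole’s methodology), the 8.1% CAGR follows directly from the 2022 and 2028 market values:

```python
# Verify the reported CAGR: US$21.2B (2022) -> US$33.8B (2028), i.e. 6 years.
def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

growth = cagr(21.2, 33.8, 2028 - 2022)
print(f"{growth:.1%}")  # 8.1%, matching the reported figure
```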

    “While mobile and consumer remain static in the overall semiconductor market, they are thriving in SiP due to 5G and computing trends. Telecom & infrastructure, automotive, and industrial sectors are the fastest-growing SiP markets, with telecom & infrastructure expecting 20.2% growth and automotive a 15.3% CAGR.”
    Yik Yee Tan, Ph.D.
    Senior Technology and Market Analyst, Packaging, Semiconductor, Memory, and Computing Division, Yole Intelligence (part of the Yole Group)

    In its new System-in-Package 2023 report, Yole Intelligence explores the SiP industry with market forecasts and trends.

    With this new product, the company analyses the related technologies and the trends toward 2.5D/3D solutions. Indeed, SiP technology trends remain aggressive as the industry continues to demand more integration to allow reduced form factors and higher-performance products. In the mobile and consumer market, for example, footprint optimization is paramount because space is limited. This is particularly valid for smartphones, wearables, and other devices. For instance, the penetration of 5G in high-end smartphones has driven the adoption of SiP for RF and connectivity modules, with the need to integrate more components and shorten interconnections to achieve the required performance.

    Furthermore, this new report includes specific sections focused on the adoption of these technologies as well as details of the ecosystem, including the supply chain, competitive landscape, and market shares.

    “The SiP market is intensifying competition as SiP technologies gain prominence due to chiplets, heterogeneous integration, cost optimization, and footprint reduction trends, attracting more entrants.”
    Gabriela Pereira
    Technology and Market Analyst, Packaging, Semiconductor, Memory, and Computing Division, Yole Intelligence (part of the Yole Group)

    Indeed, the SiP supply chain is becoming increasingly competitive, and the focus is on collaboration for optimal results. Partnerships between chip and memory players, foundries, and others are increasing, with the aim of introducing cutting-edge technologies.

    So, what is the status of each strategic region? Asia dominates the SiP market with a 77% share, led by Japan at 41%, primarily driven by Sony’s 3D CIS market. North America holds 23%, thanks to contributions from Amkor and Intel, while Europe accounts for 2%.

    From a business model point of view, FC/WB SiP is chiefly driven by OSATs like ASE, Amkor, JCET, TFME, PTI, Huatian, ShunSin, and Inari. TSMC dominates FO SiP with its InFO line, and Sony’s CIS market leads in 2.5D/3D SiP, followed by TSMC with Si interposer, Si bridge, and 3D SoC stacking.

    To maintain competitiveness, companies explore M&As and capacity expansion, offering comprehensive solutions to reduce time-to-market. These trends span various SiP market segments, including IDMs, OSATs, foundries, IC substrate suppliers, and EMS. OSATs, comprising 32% of the SiP market in 2022, focus on full-turnkey solutions and plan to invest in advanced SiP offerings. IDMs, accounting for 48%, develop proprietary packaging technologies, while foundries, mainly TSMC, hold 17% with advanced assembly capabilities. IC substrate suppliers are entering the market, and EMS models are expected to grow, especially in wearables. China’s SiP market presence is expanding, with growing capabilities and interest in packaging technologies for chiplets and hybrid bonding to enhance competitiveness.

    Acronyms

    • SiP: System-in-Package
    • CAGR: Compound Annual Growth Rate
    • AI: Artificial Intelligence
    • HPC: High Performance Computing
    • IoT: Internet of Things
    • HD FO: High-Density Fan-Out
    • FC/WB: Flip-Chip Wire-Bond
    • CIS: CMOS Image Sensor
    • SoC: System-on-Chip
    • IDM: Integrated Device Manufacturer
    • IC: Integrated Circuit

    Yole Intelligence’s semiconductor packaging team invites you to follow the technologies, related devices, applications, and markets on www.yolegroup.com.

    In this regard, do not miss Yik Yee Tan’s presentation “Global Vehicle Electrification Trends and BEV Opportunities in the Developing World” on November 8 during ISES South East Asia in Penang, Malaysia.

    Yik Yee Tan is a Senior Technology and Market Analyst and part of the packaging team at Yole Intelligence.

    Ask for a meeting with our experts at Yole Group’s booth: events@yolegroup.com.

    Stay tuned!


    AI and the Road to Full Autonomy in Autonomous Vehicles https://www.edge-ai-vision.com/2023/09/ai-and-the-road-to-full-autonomy-in-autonomous-vehicles/ Thu, 28 Sep 2023 16:52:28 +0000 https://www.edge-ai-vision.com/?p=44150 The road to fully autonomous vehicles is, by necessity, a long and winding one; systems that implement new technologies that increase the driving level of vehicles (driving levels being discussed further below) must be rigorously tested for safety and longevity before they can make it to vehicles that are bound for public streets. The network …


    The post AI and the Road to Full Autonomy in Autonomous Vehicles appeared first on Edge AI and Vision Alliance.

    The road to fully autonomous vehicles is, by necessity, a long and winding one; systems that implement new technologies to increase a vehicle’s driving level (driving levels are discussed further below) must be rigorously tested for safety and longevity before they can make it into vehicles bound for public streets. The network of power supplies, sensors, and electronics used for Advanced Driver Assistance Systems (ADAS) – whose features include emergency braking, adaptive cruise control, and self-parking systems – is extensive, with the effectiveness of ADAS determined by the accuracy of the sensing equipment coupled with the accuracy and speed of analysis of the on-board autonomous controller.

    The on-board analysis is where artificial intelligence comes into play and is a crucial element of the proper functioning of autonomous vehicles. In market research company IDTechEx’s recent report on AI hardware at the edge of the network, “AI Chips for Edge Applications 2024 – 2034: Artificial Intelligence at the Edge“, AI chips (those pieces of semiconductor circuitry capable of efficiently handling machine learning workloads) are projected to generate revenue of more than US$22 billion by 2034, and the industry vertical expected to see the highest level of growth over the next ten-year period is the automotive industry, with a compound annual growth rate (CAGR) of 13%.


    Circuitry and electrical components within a car, many of which work together to comprise ADAS.

    The part that AI plays

    The AI chips used by automotive vehicles are found in centrally located microcontrollers (MCUs), which are, in turn, connected to peripherals such as sensors and antennae to form a functioning ADAS. On-board AI compute can be used for several purposes, such as driver monitoring (where controls are adjusted for specific drivers, head and body positions are monitored in an attempt to detect drowsiness, and the seating position is changed in the event of an accident), driver assistance (where AI is responsible for object detection and appropriate corrections to steering and braking), and in-vehicle entertainment (where on-board virtual assistants act in much the same way as on smartphones or in smart appliances). The most important of these avenues is the second, driver assistance, as the robustness and effectiveness of the AI system determines the vehicle’s autonomous driving level.

    Since their introduction in 2014, the SAE Levels of Driving Automation (shown below), which define six levels of driving automation, have been the most-cited framework for driving automation in the automotive industry. These range from level 0 (no driving automation) to level 5 (full driving automation). The current highest state of autonomy in the private automotive industry (incorporating vehicles for private use, such as passenger cars) is SAE Level 2, with the jump between level 2 and level 3 being significant, given the relative advancement of technology required to achieve situational automation.


    The SAE levels of driving automation.
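    For reference, the six SAE J3016 levels can be captured in a simple lookup table; the supervision rule below is a common one-line summary, not the standard’s exact wording:

```python
# The six SAE J3016 levels of driving automation as a lookup table.
SAE_LEVELS = {
    0: "No Driving Automation",
    1: "Driver Assistance",
    2: "Partial Driving Automation",
    3: "Conditional Driving Automation",
    4: "High Driving Automation",
    5: "Full Driving Automation",
}

def driver_must_supervise(level):
    """At Levels 0-2 the human driver must supervise the vehicle at all times."""
    return level <= 2

print(SAE_LEVELS[2], driver_must_supervise(2))  # Partial Driving Automation True
```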

    A range of sensors installed in the car – including LiDAR (Light Detection and Ranging) and vision sensors, among others – relays important information to the main processing unit in the vehicle. The compute unit is then responsible for analysing this data and making the appropriate adjustments to steering and braking. In order for processing to be effective, the machine learning algorithms that the AI chips employ must be extensively trained prior to deployment. This training involves the algorithms being exposed to a great quantity of ADAS sensor data, such that by the end of the training period they can accurately detect objects, identify objects, and differentiate objects from one another (as well as objects from their background, thus determining the depth of field). Passive ADAS is where the compute unit alerts the driver to necessary action, either via sounds, flashing lights, or physical feedback. This is the case in reverse parking assistance, for example, where proximity sensors alert the driver to where the car is in relation to obstacles. Active ADAS, however, is where the compute unit makes adjustments for the driver. As these adjustments occur in real time and need to account for varying vehicle speeds and weather conditions, it is of great importance that the chips that comprise the compute unit are able to make calculations quickly and effectively.
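    The passive/active distinction described above can be sketched as a toy decision rule (the 2-second time-to-collision threshold and the actions are purely illustrative, not taken from any production ADAS):

```python
# Illustrative passive vs. active ADAS response to a detected obstacle.
# Thresholds and actions are made up for the sake of the example.

def adas_response(distance_m, speed_mps, active=False):
    """Return an action given the range to an obstacle and the vehicle speed."""
    ttc = distance_m / speed_mps if speed_mps > 0 else float("inf")
    if ttc > 2.0:
        return "none"  # obstacle is far enough away; no intervention needed
    # Passive ADAS only warns the driver; active ADAS intervenes directly.
    return "brake" if active else "warn"

print(adas_response(30.0, 20.0))               # TTC 1.5 s -> warn (passive)
print(adas_response(30.0, 20.0, active=True))  # TTC 1.5 s -> brake (active)
```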

    A scalable roadmap


    The trend for automotive SoCs is for performance to increase each year while process nodes move toward the leading edge.

    SoCs for vehicular autonomy have only been around for a relatively short amount of time, yet it is clear that there is a trend towards smaller node processes, which aid in delivering higher performance. This makes sense logically, as higher levels of autonomy will necessarily require a greater degree of computation (as the human computational input is effectively outsourced to semiconductor circuitry). The above graph collates the data of 11 automotive SoCs, one of which was released in 2019, while others are scheduled for automotive manufacturers’ 2024 and 2025 production lines. Among the most powerful of the SoCs considered are the Nvidia DRIVE Thor, which is expected in 2025 and for which Nvidia claims a performance of 2,000 trillion operations per second (TOPS), and the Qualcomm Snapdragon Ride Flex, which has a performance of 700 TOPS and is expected in 2024.

    Moving to smaller node sizes requires more expensive semiconductor manufacturing equipment (particularly at the leading edge, where Deep Ultraviolet and Extreme Ultraviolet lithography machines are used) and more time-consuming manufacturing processes. As such, the capital required for foundries to move to more advanced node processes proves a significant barrier to entry for all but a few semiconductor manufacturers. This is one reason that several IDMs are now outsourcing high-performance chip manufacture to those foundries already capable of such fabrication.

    In order to keep costs down in the future, it is also important for chip designers to consider the scalability of their systems, as the stepwise increase in autonomous driving level adoption means that designers who do not consider scalability at this juncture run the risk of spending more on designs at each successive node. Given that 4 nm and 3 nm chip design (at least for the AI accelerator portion of the SoC) likely offers sufficient performance headroom up to SAE Level 5, it behooves designers to consider hardware that is able to adapt to handling increasingly advanced AI algorithms.

    It will be some years until we see cars on the road capable of the most advanced automation levels proposed above, but the technology to get there is already gaining traction. The next couple of years, especially, will be important ones for the automotive industry.

    Report coverage

    IDTechEx forecasts that the global AI chips market for edge devices will grow to US$22.0 billion by 2034, with AI chips for automotive accounting for more than 10% of this figure. IDTechEx’s report gives analysis pertaining to the key drivers for revenue growth in edge AI chips over the forecast period, with deployment within the key industry verticals – consumer electronics, industrial automation, and automotive – reviewed. Case studies of automotive players’ leading system-on-chips (SoCs) for ADAS are given, as are key trends relating to performance and power consumption for automotive controllers.

    More generally, the report covers the global AI Chips market across eight industry verticals, with 10-year granular forecasts in six different categories (such as by geography, by chip architecture, and by application). IDTechEx’s report “AI Chips for Edge Applications 2024 – 2034: Artificial Intelligence at the Edge” answers the major questions, challenges, and opportunities the edge AI chip value chain faces. For further understanding of the markets, players, technologies, opportunities, and challenges, please refer to the report.

    For more information on this report, please visit www.IDTechEx.com/EdgeAI, or for the full portfolio of AI research available from IDTechEx please visit www.IDTechEx.com/Research/AI.

    About IDTechEx

    IDTechEx guides your strategic business decisions through its Research, Subscription and Consultancy products, helping you profit from emerging technologies. For more information, contact research@IDTechEx.com or visit www.IDTechEx.com.


    Optical Imaging: PISÉO On the Starting Blocks https://www.edge-ai-vision.com/2023/09/optical-imaging-piseo-on-the-starting-blocks/ Fri, 22 Sep 2023 14:44:54 +0000 https://www.edge-ai-vision.com/?p=44053 This market research report was originally published at the Yole Group’s website. It is reprinted here with the permission of the Yole Group. PISÉO broadens its field of expertise in optical radiation measurement. OUTLINE The new service offering is aimed at the medical, automotive, XR, Industry 4.0, defense, and aeronautics markets. Among the characteristics measured …


    The post Optical Imaging: PISÉO On the Starting Blocks appeared first on Edge AI and Vision Alliance.

    This market research report was originally published at the Yole Group’s website. It is reprinted here with the permission of the Yole Group.

    PISÉO broadens its field of expertise in optical radiation measurement.

    OUTLINE

    • The new service offering is aimed at the medical, automotive, XR, Industry 4.0, defense, and aeronautics markets.

    • Among the characteristics measured are MTF, geometric distortion, signal-to-noise ratio, field of view, sharpness, contrast, colorimetry, etc.

    • PISÉO’s expertise is recognized by the certifications received from relevant approving bodies, including product approval organizations in the automotive market segment.

    PISÉO, Yole Group’s partner, is pleased to announce the launch of its optical imaging testing service. This new offering is the result of a journey based on 12 years of in-depth experience in lighting. The company, ISO 17025-accredited by COFRAC, is a major player in photometry and colorimetry. PISÉO is uniquely positioned with a comprehensive service offering, including support in the design and industrialization of optical solutions. Its global expertise in norms and standards and its active involvement in the Committees for Standardization since its creation give PISÉO a significant advantage in providing adequate testing programs and meeting the requirements of a complex normative system in constant evolution.

    Set up by experts in optical imaging solutions over a year ago, the laboratory benefits from cutting-edge equipment essential for a reliable and accurate characterization of the entire optical chain for imaging systems in high-value-added areas. Several companies in the medical field (manufacturers of medical endoscopes or dental equipment) and XR – which includes AR, MR, and VR – already rely on PISÉO. The laboratory is involved during the development phase as part of potential product creation, or just before the product launch for the purpose of certification.

    In the automotive industry, the company is also mandated by approval bodies to demonstrate, through independent and rigorous testing, the compliance of on-board optical imaging systems – CMS, front cameras, DMS, etc. – with international regulatory requirements that support ADAS features. Indeed, market demand is very strong in this sector. In its Imaging for Automotive report, Yole Intelligence, part of Yole Group, states that there were 2.6 cameras per car produced in 2021. This number should rise to 4.6 cameras per car produced in 2027, with high-end vehicles potentially fitted with at least double that number.
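    A quick check of the growth rate implied by those camera-per-car figures (an illustration, not Yole’s own calculation):

```python
# Implied annual growth in cameras per car produced, 2021 (2.6) -> 2027 (4.6).
cams_2021, cams_2027 = 2.6, 4.6
annual_growth = (cams_2027 / cams_2021) ** (1 / (2027 - 2021)) - 1
print(f"{annual_growth:.1%}")  # 10.0% per year
```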

    “We offer our clients assistance in understanding the standards, interpreting the results, and certifying their products.”
    Joël Thomé
    CEO, PISÉO

    Acronyms

    • MTF: Modulation Transfer Function
    • XR: Extended Reality
    • AR: Augmented Reality
    • MR: Mixed Reality
    • VR: Virtual Reality
    • CMS: Camera Monitor Systems
    • DMS: Driver Monitoring Systems
    • ADAS: Advanced Driver Assistance Systems


    Yole Group and PISÉO teams invite you to follow the technologies, related devices, applications, and markets on www.yolegroup.com and piseo.fr.

    Stay tuned!


    New RISC-V Market Report Will Provide 5-year Growth Projections for Semiconductor Devices and Insights on the Enabling Ecosystem https://www.edge-ai-vision.com/2023/09/new-risc-v-market-report-will-provide-5-year-growth-projections-for-semiconductor-devices-and-insights-on-the-enabling-ecosystem/ Tue, 19 Sep 2023 16:42:13 +0000 https://www.edge-ai-vision.com/?p=43994 SAN JOSE, Calif., September 19, 2023 (Newswire.com) – The SHD Group, a leading strategic marketing and business development firm, today announced its plans to create a detailed market research report that will provide an in-depth analysis of the RISC-V market. It will include design-start projections from 40nm to 2nm by application, RISC-V IP, ecosystem directions, …


    The post New RISC-V Market Report Will Provide 5-year Growth Projections for Semiconductor Devices and Insights on the Enabling Ecosystem appeared first on Edge AI and Vision Alliance.

    The SHD Group, a leading strategic marketing and business development firm, today announced its plans to create a detailed market research report that will provide an in-depth analysis of the RISC-V market. It will include design-start projections from 40nm to 2nm by application, RISC-V IP, ecosystem directions, and a 5-year forecast. The report will be available in the 4th quarter of 2023.

    The analyst report will be authored by Rich Wawrzyniak, who has over 20 years of semiconductor industry experience, specifically in market research and analysis. His research experience encompasses ASICs, SoCs, SIP, memory, design starts, and emerging technologies, such as AI, RISC-V, and chiplets. Formerly a principal market analyst at Semico, he is currently a principal analyst at The SHD Group. Rich is a graduate of Loyola University in Chicago, Ill.

    This new RISC-V Market Report will include:

    • Market analysis of the current and future landscape for RISC-V applications.
    • RISC-V-based SoC unit volume and revenue projections with 5-year forecasts for RISC-V growth by application segment.
    • Related IP market trends, including projected growth, licensing, royalty, and maintenance revenues.
    • Software tools and ecosystem for RISC-V, highlighting their role in enabling RISC-V technology design wins.
    • Regional insights related to RISC-V adoption and growth in North America, Europe, Japan, China, and the Asia Pacific region.

    “We are developing this RISC-V Market Report to provide much-needed insights into this fast-paced and evolving market,” said Derek Meyer, CEO of The SHD Group. “RISC-V has emerged as a disruptive force in the semiconductor industry, and our report will equip stakeholders with the knowledge needed to make informed decisions about the future direction of their companies and products.”

    Calista Redmond, CEO of RISC-V International, commented: “RISC-V stands at the forefront of innovation, driving the future of semiconductor design with its open standard instruction set architecture (ISA). We are pleased that The SHD Group is developing this report and believe it will serve as an invaluable guide for our 3,800 members in over 70 countries who define RISC-V open specifications.”

    RISC-V, an open and versatile ISA, has gained significant traction in shaping the future of the next generation of semiconductor design. The RISC-V Market Report promises to be an invaluable resource for industry executives, investors, and designers seeking a comprehensive understanding of this evolving landscape.

    If you are a RISC-V technology provider and wish to schedule a briefing with Rich Wawrzyniak about your products for this upcoming Market Report, please reserve a time at: https://calendly.com/shd-r-wawrzyniak/risc-v-briefing-with-rich-wawrzyniak

    About The SHD Group

    The leadership team at The SHD Group has a decades-long track record of successful business development rooted in strategic market analysis. The company specializes in providing its clients with essential insights required to navigate the complexities of emerging markets and to formulate strategies for capturing market share in both new and established technology sectors. Our experience spans diverse industries, including AI, processors, sensors, systems, design services, and silicon, with a significant presence in consumer, automotive, industrial, and computing sectors. At The SHD Group, we are committed to being your reliable partner in achieving lasting success. Learn more at TheSHDGroup.com.

    The post New RISC-V Market Report Will Provide 5-year Growth Projections for Semiconductor Devices and Insights on the Enabling Ecosystem appeared first on Edge AI and Vision Alliance.

    Mobile Robotics: Increasing Flexibility Enables Increasing Efficiency in Logistics https://www.edge-ai-vision.com/2023/09/mobile-robotics-increasing-flexibility-enables-increasing-efficiency-in-logistics/ Mon, 18 Sep 2023 14:55:55 +0000 https://www.edge-ai-vision.com/?p=43913

    Mobile robots have experienced substantial growth in the last decade due to their autonomous mobility, which has been propelled by advancements in robotics technology, autonomous navigation, and artificial intelligence. IDTechEx’s market research report, titled “Mobile Robotics in Logistics, Warehousing, and Delivery 2024-2044“, delves into the technical, regulatory, and market aspects influencing the emerging logistics mobile robot industry.

    According to IDTechEx’s research, there has been a significant cumulative increase in funding for various types of mobile robots (e.g., mobile picking, intralogistics, material handling, etc.) from 2015 to 2022. This trend highlights the potential of mobile robots to automate various logistics operations, including material handling, material picking, long-haul distribution, and last-mile delivery.

    As of 2023, some applications, such as material transport using automated guided vehicles (AGVs), have already reached a mature stage, attracting billions of dollars in annual revenue. However, other applications, like drone delivery, are still emerging and are not expected to see widespread deployment until the end of this decade due to regulatory constraints and specific technological challenges. Despite varying market readiness levels, each segment is experiencing rapid growth, with instances of technology giants acquiring start-up companies. Some noteworthy recent examples include the acquisition of Robotnik by United Robotics Group, Amazon’s acquisition of iRobot, and the merger of Mobile Industrial Robots (MiR, a Teradyne company) with AutoGuide Mobile Robots.

    IDTechEx’s report, “Mobile Robotics in Logistics, Warehousing and Delivery 2024-2044”, examines the key products used in various logistics operations, including AGVs (e.g., tow tractors, forklift AGVs, unit load carts, ‘goods-to-person’ grid-based AGVs, etc.), autonomous mobile robots (AMRs), case-picking robots, mobile manipulators, heavy-load autonomous mobile vehicles (AMVs), and last-mile delivery robots (vans, sidewalk robots, and drones). The report highlights recent technology advancements and commercial transitions, such as how Amazon’s first fully autonomous mobile robot, Proteus, is expected to drive the adoption of AMRs in warehouses.

    On the technology side, IDTechEx analyzed teardowns of various mobile robots to offer an in-depth assessment of the emerging technologies underpinning their growth. These range from sensors such as LiDAR, ultrasonic sensors, and cameras to software such as computer vision, simultaneous localization and mapping (SLAM), and other navigation and sensing technologies. IDTechEx interviewed industry players ranging from leading companies such as Omron, MiR, and ForwardX Robotics to mid-sized companies and start-ups, discussing the unique value propositions of their technologies, how they address customer pain points, and the latest business information, such as regulatory approvals, funding status, and flagship products.

    One of the transitions IDTechEx has spotted is the shift from AGVs to AMRs. AGVs have been widely adopted in many warehouses. Unlike AMRs, AGVs rely on infrastructure such as magnetic tapes, QR codes, and markers to navigate. Although this provides high navigation accuracy, it increases the total cost of setting up the infrastructure and restricts the flexibility of AGVs. With the emergence of SLAM technology and more companies demanding flexible operation, IDTechEx has begun to see AMRs, such as Amazon’s Proteus, used in factories and warehouses. The IDTechEx report analyzes both AGVs and AMRs and provides an independent assessment of the scenarios in which each robot type is favorable to users.
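
    The flexibility gap can be sketched with a toy example (a hypothetical grid world, not any vendor’s actual navigation stack): an AGV confined to a taped route stops when that route is blocked, while an AMR that plans over a map, as SLAM-based systems do, simply routes around the obstacle.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest path on a 4-connected grid (1 = obstacle); None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:  # walk back through predecessors to recover the path
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and nxt not in prev:
                prev[nxt] = cur
                queue.append(nxt)
    return None

# A dropped pallet at (0, 2) blocks the taped AGV route along the top row.
grid = [[0, 0, 1, 0],
        [0, 0, 0, 0]]
blocked_tape_route = [(0, 0), (0, 1), (0, 2), (0, 3)]  # fixed AGV route: now impassable
amr_path = bfs_path(grid, (0, 0), (0, 3))              # AMR replans via the bottom row
print(amr_path)
```

    Real AMRs replace the hand-written grid with a SLAM-built occupancy map and a costed planner, but the principle is the same: with no fixed infrastructure, a blocked aisle is a detour rather than a stoppage.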

    The report also provides 20-year market forecasts for the future of mobile robotics in the logistics industry, including granular breakdowns by application area and product category. IDTechEx forecasts that the yearly market for mobile robots (excluding L4 trucks) will reach around US$150 billion, representing significant opportunities for component suppliers, robot OEMs, and end users.
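
    Headline multiples like these translate directly into compound annual growth rates. A 20-fold increase over two decades, for instance, works out to roughly 16% per year (illustrative arithmetic only, not an IDTechEx figure):

```python
def implied_cagr(multiple: float, years: int) -> float:
    """Compound annual growth rate implied by a `multiple`-fold rise over `years` years."""
    return multiple ** (1 / years) - 1

print(f"{implied_cagr(20, 20):.1%}")  # about 16.2% per year
```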

    To find out more about IDTechEx’s technical and commercial analysis of mobile robotics in the logistics industry, please see the IDTechEx report, “Mobile Robotics in Logistics, Warehousing, and Delivery 2024-2044“.

    Free-to-Attend Upcoming Webinar: Mobile Robotics in Logistics – Over 20-fold Revenue Increase in the Next Two Decades

    Yulin Wang, Technology Analyst at IDTechEx and author of this article, will present a free-to-attend webinar on the topic on Tuesday 10 October 2023: Mobile Robotics in Logistics – Over 20-fold Revenue Increase in the Next Two Decades.

    The webinar’s agenda includes the following topics:

    • An overview of mobile robots and a breakdown of different types of mobile robots used in logistics
    • Exploring the reasons behind the rapid growth of mobile robots in logistics and the benefits they offer to various industries
    • Discussion of the fundamental technologies and components that empower mobile robot systems
    • Analyzing the present state of the market, including noteworthy developments such as acquisitions and new product launches by industry leaders
    • Delving into the obstacles and regulatory considerations that mobile robots face, along with a timeline for addressing these challenges
    • Providing an overview of IDTechEx’s independent forecast for the future of mobile robots in the logistics sector

    Click here to find out more and register your place in one of the three sessions on 10 October.

    About IDTechEx

    IDTechEx guides your strategic business decisions through its Research, Subscription and Consultancy products, helping you profit from emerging technologies. For more information, contact research@IDTechEx.com or visit www.IDTechEx.com.

    The post Mobile Robotics: Increasing Flexibility Enables Increasing Efficiency in Logistics appeared first on Edge AI and Vision Alliance.

    The MEMS Industry: Looking Back at the Last 20 Years of Innovation and Growth https://www.edge-ai-vision.com/2023/09/the-mems-industry-looking-back-at-the-last-20-years-of-innovation-and-growth/ Fri, 15 Sep 2023 20:20:59 +0000 https://www.edge-ai-vision.com/?p=43890

    This market research report was originally published at the Yole Group’s website. It is reprinted here with the permission of the Yole Group.

    Overcoming a global economic downturn, the MEMS market is set to grow to US$20 billion by 2028 as MEMS allow OEMs in the consumer, automotive, and other industries to optimize the cost, size, and performance of their systems.

    In today’s article, Pierre Delbos and Pierre-Marie Visse from Yole Intelligence and Khrystyna Kruk from Yole SystemPlus reflect on the last 20 years of the MEMS market, highlighting the evolving drivers, the most significant MEMS innovations, and the future areas set to drive MEMS demand.

    This analysis forms part of the report, Status of the MEMS Industry 2023, along with related reverse engineering & costing analyses, including:

    These products are fully based on Yole Group’s 20 years of expertise in the MEMS market and technologies.

    Yole Intelligence and Yole SystemPlus are part of Yole Group.

    Increased market readiness of emerging devices will drive the MEMS market of the future

    In addition to the market being driven by a growing number of devices and increased MEMS-enabled functionality, we expect to see a rise in market adoption of other devices that will fuel additional demand in the near future.

    For example, MEMS microspeakers offer potential thanks to their very small footprint but have typically not performed as well as legacy technology at low frequencies. As microspeaker companies improve performance and achieve design wins – such as within TWS headphones and hearing aids – volumes could push down the price to the point where they can compete with traditional speakers.

    Furthermore, as emerging systems edge closer to mass adoption, we expect to see greater MEMS adoption.

    For example, with the high-volume arrival of AR glasses in 2030, Yole Group expects MEMS micromirrors for LBS microprojection to act as a stop-gap solution (competing with LCoS) before microLEDs become viable.

    “While MEMS gas sensors traditionally have performed worse than other sensors, we expect them to find a place within low-cost, low-power-consumption applications with space constraints and commoditization of consumer apps. Applications such as monitoring indoor air quality, where there is renewed interest following the COVID-19 pandemic, could help drive penetration in this market.”
    Pierre-Marie Visse
    Technology and Market Analyst, Sensing and Actuating, Yole Intelligence (part of the Yole Group)

    Another noticeable trend is the transition from 8-inch to 12-inch MEMS manufacturing. While this involves a significant investment for MEMS players, it allows better integration with 12-inch CMOS wafers and supports optimal equipment performance, for example in lithography and DRIE steps.
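
    The economics of that transition come largely from gross die count: wafer area scales with the square of the diameter, so a 300 mm wafer offers 2.25 times the area of a 200 mm wafer, and slightly more than 2.25 times the dies once edge loss is accounted for. A back-of-the-envelope check using the classic die-per-wafer approximation (illustrative die size, not a figure from the Yole report):

```python
import math

def gross_dies(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Classic gross die-per-wafer estimate with an edge-loss correction term."""
    d, s = wafer_diameter_mm, die_area_mm2
    return int(math.pi * (d / 2) ** 2 / s - math.pi * d / math.sqrt(2 * s))

die_area = 1.0  # hypothetical 1 mm^2 MEMS die
d200 = gross_dies(200, die_area)
d300 = gross_dies(300, die_area)
print(d200, d300, round(d300 / d200, 2))  # the ratio comes out just above 2.25
```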

    Though this report has existed in various iterations for 20 years now, the MEMS industry is still bringing disruptive technologies and applications to the table. To succeed in this industry, it is necessary to meet the market’s requirements: a growing total accessible market, a technology with an actual added value for specific applications, and enough funding and support to sustain fierce competition against an established industry. But with all these new players, technologies, and megatrends, the future of the MEMS industry looks bright!

    Back to the future: How MEMS market drivers have evolved in 20 years

    The success of MEMS technologies is indisputable, and MEMS devices continue to be increasingly adopted in almost all markets.

    “With the ongoing global demand for sensorization and data-driven applications, the past 20 years during which we have analyzed the MEMS industry have shown continuous innovation and even opened new product perspectives. Over the years, various market drivers, successive crises, and ecosystem changes shaped the US$14 billion MEMS industry of today.”
    Pierre Delbos
    Technology and Market Analyst, Sensing and Actuating, Yole Intelligence (part of the Yole Group)

    In 2003, automotive was a major driver as more advanced safety features started to be incorporated into vehicles. Accelerometers used in airbags, gyroscopes for ESP systems, and the early adoption of pressure sensors for tire pressure monitoring were some of the first automotive applications for MEMS.

    MEMS inkjet printheads from HP created significant demand around this time, surpassing automotive in both volume and sales: around US$1 billion, compared to US$800 million for automotive. However, one of the most symbolic MEMS devices of the early 2000s was Texas Instruments’ DLP for projection applications, which generated sales of about US$300 million and made Texas Instruments an early market leader alongside HP.

    The arrival of the iPhone in 2007 and subsequent widespread adoption of smartphones caused a surge in MEMS demand in the consumer sector, which is the largest market today. Automatic screen rotation created early demand for accelerometers, and the introduction of more advanced smartphone features such as navigation assistance, step counting, and gaming further fueled the demand for inertial sensors. MEMS microphones also started to be used in smartphones – the Motorola Razr was one of the first adopters – and ended up as one of the most shipped MEMS devices in the industry. Microphones were the perfect example of how MEMS’ small size and low power consumption matched smartphone requirements.

    A second wave in the consumer electronics industry started in 2016 and saw increased adoption of wearable devices, such as smartwatches and TWS headphones, which also boosted the MEMS industry. In addition to microphones that allow beamforming, inertial sensors were used in TWS headphones for bone conduction sensing, allowing perfect voice pick-up and 3D audio functionalities. This second consumer wave and subsequent surge in demand for inertial MEMS sensors allowed STMicroelectronics and Bosch to take the leading market positions from early leaders such as TI and HP, with revenues surpassing US$1 billion (STMicroelectronics) and nearly US$2 billion (Bosch) today.

    Pierre Delbos adds: “As we wait for the consumer market to breathe new life following the recent fall in smartphone demand, the automotive market is now leading growth in the MEMS industry. Automotive is being propelled by the electrification of cars and the introduction of autonomous driving, leading to the complete sensorization of vehicles.”

    Indeed, even though more pressure sensors are used in an ICE car than in an electric vehicle, we expect other car domains to undergo massive transformation and reshuffle the needs for MEMS. Autonomous automotive functions are driving the adoption of MEMS inertial sensors, micromirrors, magnetometers, and more. MEMS oscillators are also increasingly being used to support the exchange of the ever-growing amount of data in the automotive sector and within telecommunications, particularly with the arrival of 5G.

    Technology outlook: mature, but innovation ongoing across the entire value chain

    MEMS technologies are well established in their target markets. But despite the maturity of MEMS components such as pressure sensors, inertial sensors, and microphones, notable innovation is taking place, allowing OEMs to optimize cost, size, and performance, further driving demand.

    “Innovations are occurring across the entire MEMS manufacturing process, from the choice of material all the way to the final integration of dies in the package, while noticeable technology trends have developed.”
    Khrystyna Kruk
    Technology and Cost Analyst, Yole SystemPlus (part of the Yole Group)

    From the design and material standpoint, MEMS players are innovating to improve the performance of their devices. For example, Infineon Technologies’ sealed dual-membrane technology within Goertek’s MEMS microphone in the Apple AirPods Pro significantly improves SNR performance.

    By moving away from its single-back (2010) and dual-back (2014) designs to a sealed dual-membrane (2019) structure, water and dust are prevented from being trapped between the membrane and the backplate, enabling practically noise-free audio signal capture (68-75 dB(A)). In addition, Vesper’s mono-membrane design inside its piezoelectric MEMS microphone represents a radical technology shift, improving water- and dustproofing for better performance in applications that require high robustness and reliability. This may explain why the start-up attracted Qualcomm, the third-biggest MEMS player.

    But MEMS leaders are also innovating on the integration side. From 2015 to 2018, Bosch’s pressure sensors in the Apple iPhone moved from LGA packaging to an O-ring waterproof package, allowing Apple to improve durability. During this period, Bosch more than halved the size of the MEMS die, from 0.8 mm² to 0.35 mm², following the industry pattern of miniaturization. And from 2018 to 2023, iPhone pressure sensors kept the same footprint but were packaged differently. Bosch’s 2023 BMPxxx v2 MEMS pressure sensor die in the latest iPhones, for example, is attached with adhesive to the ASIC die, which is in turn glued onto the ceramic substrate, with electrical connections made by wire bonds. Previous versions used flip-chip bonding to integrate the ASIC inside the package.

    Bosch even came out with a new manufacturing technique! Its new laser reseal process significantly reduces pressure variation to maximize the performance of inertial sensors within the iPhone 14 Pro. While the process is three times more expensive than previous processes, it allows MEMS gyroscopes and accelerometers to be integrated onto the same die, thus enabling further miniaturization of the sensor and better control of the vacuum level inside the cavity.

    Stay tuned on yolegroup.com!


    Yole Group will attend the MEMS & Imaging Sensors Forum, powered by SEMI. Analysts will present key results of the annual MEMS report in a dedicated presentation during the “Future of the MEMS Technology” session on Sep. 20. Yole Group’s speaker, Pierre-Marie Visse, will explore the latest and future MEMS trends for a smarter world.

    Analysts will be glad to meet Yole Group’s customers and business partners, establish new contacts, and help drive business forward during the show.

    Come and meet them to look through the latest market, technology, reverse engineering, and reverse costing analyses. Request a meeting at Yole Group’s booth: events@yolegroup.com.

    The post The MEMS Industry: Looking Back at the Last 20 Years of Innovation and Growth appeared first on Edge AI and Vision Alliance.

    The Global Market for Lidar in Autonomous Vehicles Will Grow to US$8.4 Billion by 2033 https://www.edge-ai-vision.com/2023/09/the-global-market-for-lidar-in-autonomous-vehicles-will-grow-to-us8-4-billion-by-2033/ Thu, 14 Sep 2023 11:47:18 +0000 https://www.edge-ai-vision.com/?p=43823

    Demand for lidar adoption in the automotive industry is driving huge investment and rapid progress, with innovations in beam-steering technologies, performance improvement, and cost reduction in lidar transceiver components. These efforts can enable lidars to be deployed in a wider range of application scenarios, beyond conventional use cases and beyond automobiles.

    However, the rapidly evolving lidar technologies and markets leave many open questions. The technology landscape is cluttered with numerous options for every component in a lidar system.

    In the report “Lidar 2023-2033: Technologies, Players, Markets & Forecasts”, experts at IDTechEx have identified four important technology choices that every lidar player and lidar user must make: measurement process, laser, beam steering mechanism, and photodetector. Dr Xiaoxi He, IDTechEx Research Director and lead author of the report, comments, “The technology choices made today will have immense consequences for performance, price, and scalability of lidar in the future. The present state of the lidar market is unsustainable because winning technologies and players will inevitably emerge, consolidating the technology and business landscapes.”
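
    The breadth of that design space is easy to underestimate. Even a simplified enumeration of the four choices (illustrative option lists, not the taxonomy used in the report) yields dozens of candidate lidar architectures:

```python
from itertools import product

# Illustrative options only -- not IDTechEx's taxonomy.
choices = {
    "measurement process": ["time of flight", "FMCW"],
    "laser": ["905 nm edge-emitting", "1550 nm fiber"],
    "beam steering": ["mechanical", "MEMS mirror", "optical phased array", "flash"],
    "photodetector": ["APD", "SPAD", "photodiode"],
}
architectures = list(product(*choices.values()))
print(len(architectures))  # 2 * 2 * 4 * 3 = 48 combinations
```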

    IDTechEx research in “Lidar 2023-2033: Technologies, Players, Markets & Forecasts” finds that the global market for 3D lidar in automotive will grow to US$8.4 billion by 2033.

    The report presents an unbiased analysis of primary data gathered via interviews with key players and builds on IDTechEx’s expertise in the transport, electronics, and photonics sectors. While the market analysis and forecasts focus on the automotive industry, the technology analysis and company profiles also cover lidar for industrial automation, robotics, smart city, security, and mapping. For more information on lidar, including downloadable report sample pages, please visit www.IDTechEx.com/Lidar.

    About IDTechEx

    IDTechEx guides your strategic business decisions through its Research, Subscription and Consultancy products, helping you profit from emerging technologies. For more information, contact research@IDTechEx.com or visit www.IDTechEx.com.

    The post The Global Market for Lidar in Autonomous Vehicles Will Grow to US$8.4 Billion by 2033 appeared first on Edge AI and Vision Alliance.
