Sensors - Edge AI and Vision Alliance
https://www.edge-ai-vision.com/category/technologies/sensors/
Designing machines that perceive and understand.

“MIPI CSI-2 Image Sensor Interface Standard Features Enable Efficient Embedded Vision Systems,” a Presentation from the MIPI Alliance https://www.edge-ai-vision.com/2023/10/mipi-csi-2-image-sensor-interface-standard-features-enable-efficient-embedded-vision-systems-a-presentation-from-the-mipi-alliance/ Mon, 09 Oct 2023 08:00:35 +0000

Haran Thanigasalam, Camera and Imaging Consultant to the MIPI Alliance, presents the “MIPI CSI-2 Image Sensor Interface Standard Features Enable Efficient Embedded Vision Systems” tutorial at the May 2023 Embedded Vision Summit. As computer vision applications continue to evolve rapidly, there’s a growing need for a smarter standardized interface connecting…

Sony Semiconductor Solutions Concludes Pedestrian Safety Challenge, Announces Winners with tinyML Foundation and The City of San José https://www.edge-ai-vision.com/2023/10/sony-semiconductor-solutions-concludes-pedestrian-safety-challenge-announces-winners-with-tinyml-foundation-and-the-city-of-san-jose/ Fri, 06 Oct 2023 23:22:49 +0000

Sony reveals Leopard Imaging, NeurOHM, and King Abdullah University of Science and Technology as winners of the Tech for Good competition in support of the city’s Vision Zero initiatives.

SAN JOSÉ, Calif., Oct. 5, 2023 /PRNewswire/ — Today, Sony Semiconductor Solutions America (SSS-A), alongside the tinyML Foundation and The City of San José, announced the final winners of the Pedestrian Safety Challenge hackathon, which began in May as an effort to reduce pedestrian-involved accidents in connection with the city’s Vision Zero initiatives.

The three organizations joined together to encourage teams across the globe to tackle this issue, as pedestrian injuries and fatalities have become more common due to distracted driving, distracted walking, illegal roadway crossings, and more.

The hackathon boasted 29 participating teams from across the globe, including the United States, Germany, Lebanon, Nigeria, and Saudi Arabia, as well as teams local to Silicon Valley and the San Francisco Bay Area (SFBA).

Mark Hanson, Vice President and Head of Marketing for System Solution Business Development at SSS-A, states: “It was a pleasure to partner with tinyML and the City of San José on the important issue of pedestrian safety, especially as a native resident and with Sony Electronics’ office in the city. The groundbreaking, people-first solutions coming from these teams make us optimistic, not just about local Vision Zero efforts, but about seeing these technologies used to benefit communities around the globe.”

First place was awarded to the Leopard Imaging team, which presented a solution featuring SSS’s AITRIOS™ platform and IMX500-enabled hardware; the NeurOHM team took second place, a team from King Abdullah University of Science and Technology (KAUST) took third, and a special Edge Impulse award also went to the KAUST team.

Evgeni Gousev, Senior Director at Qualcomm and Chair of the Board of Directors at the tinyML Foundation, says: “As a global non-profit organization with a mission to accelerate the development and adoption of energy-efficient, sustainable machine learning technologies, we were enthusiastic about this collaboration with the City of San José, Sony, and other partner companies. We were very pleased to see a strong response from the tinyML community, are grateful to all the teams and participants who contributed their ideas and proposals for this real-world problem, and would like to congratulate the finalists on delivering innovative yet practical solutions.”

Hanson continues: “It was very exciting for us that Leopard Imaging entered with an AITRIOS-built solution and won first place in the hackathon. It shows that vision AI tools like AITRIOS can provide a tangible, low-cost, and scalable platform to support Vision Zero and pedestrian safety initiatives.”

“Through our partnership with Sony and tinyML, brilliant minds from across the world have generated ideas that will ultimately save lives in San José and beyond,” said San José Mayor Matt Mahan.

To learn more about the Pedestrian Safety Challenge and its winning solutions, please visit the tinyML Foundation website.

About Sony Semiconductor Solutions America

Sony Semiconductor Solutions America is part of the Sony Semiconductor Solutions Group, today’s global leader in image sensors. We strive to provide advanced imaging technologies that bring greater convenience and joy to people’s lives. We also work to develop and bring to market new kinds of sensing technologies, with the aim of offering solutions that take the visual and recognition capabilities of both humans and machines to greater heights. Visit us at: https://www.sony-semicon.co.jp/e/

Corporate slogan “Sense the Wonder”: https://www.sony-semicon.co.jp/e/company/vision

“Introduction to the CSI-2 Image Sensor Interface Standard,” a Presentation from the MIPI Alliance https://www.edge-ai-vision.com/2023/10/introduction-to-the-csi-2-image-sensor-interface-standard-a-presentation-from-the-mipi-alliance/ Fri, 06 Oct 2023 08:00:02 +0000

Haran Thanigasalam, Camera and Imaging Consultant to the MIPI Alliance, presents the “Introduction to the MIPI CSI-2 Image Sensor Interface Standard” tutorial at the May 2023 Embedded Vision Summit. By taking advantage of select features in standardized interfaces, vision system architects can help reduce processor load, cost and power consumption…

Five Years for Vision Components and MIPI: New MIPI Camera Module for Highest Image Quality https://www.edge-ai-vision.com/2023/10/five-years-for-vision-components-and-mipi-new-mipi-camera-module-for-highest-image-quality/ Thu, 05 Oct 2023 18:01:27 +0000

Ettlingen, Germany, October 5, 2023 – Five years ago, Vision Components presented the first MIPI cameras for industrial series applications. Today, the manufacturer from Ettlingen, Germany, offers more than 20 different image sensors as MIPI modules. Brand new is the VC MIPI IMX585, which offers excellent image quality in all lighting conditions with 4K resolution and high dynamic range. The company also announces that the VC Lib image processing software will soon be freely available to all customers.

For more information: www.mipi-modules.com

VC MIPI IMX585: 4K resolution and highest dynamic range

The VC MIPI IMX585 camera is based on the Sony STARVIS 2 IMX585 image sensor and offers 8.4-megapixel resolution with 4K and HDR support. The sensor has larger pixels than comparable modules and, with its 88 dB dynamic range, delivers high image quality in all lighting conditions. With its MIPI CSI-2 interface, the camera is ideally suited for AI-based medical applications and other demanding vision tasks. It can be configured as a color or monochrome camera and will be available in volume toward the end of the year.
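
For a sense of what the 88 dB figure means in linear terms, dynamic range in decibels maps to a contrast ratio via 20·log10(max/min). A minimal Python sketch of the conversion (the 88 dB value comes from the module description above; everything else is illustrative):

```python
def db_to_contrast_ratio(db):
    """Convert a sensor dynamic range in dB to a linear max/min contrast ratio,
    using the 20*log10 convention common for image sensors."""
    return 10 ** (db / 20)

# The IMX585's quoted 88 dB corresponds to roughly a 25,000:1 ratio between
# the brightest and darkest signal levels the sensor can resolve in one scene.
print(f"88 dB -> {db_to_contrast_ratio(88):,.0f}:1")  # ~25,119:1
```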

Designed for industrial mass production

The VC MIPI camera modules are developed and manufactured by Vision Components in Ettlingen near Karlsruhe, Germany. They offer high quality, a robust, industry-optimized design, and long-term availability. Via the MIPI CSI-2 interface, the camera modules can be connected to all common processor platforms; corresponding drivers are provided by Vision Components free of charge.

Comprehensive accessories and individual sensors on request

Vision Components also supplies, from a single source, high-performance cables and accessories perfectly matched to the MIPI cameras. These smart components enable vision OEMs to bring their projects to market faster, more easily, and more cost-efficiently. The manufacturer is continuously adding image sensors to its VC MIPI portfolio, including sensors for applications such as SWIR and 3D/ToF. Upon customer request, VC also integrates special sensors into MIPI modules, even those that do not natively support a MIPI interface.

VC Lib now open to all customers

To better support customers integrating embedded vision, the VC Lib software library will be made freely available to all of the company’s customers. Until now, it was reserved for buyers of VC embedded vision systems. VC Lib includes basic image processing functions as well as more complex functions such as pattern recognition and barcode reading. The routines are highly optimized for ARM processor platforms and enable fast, cost-effective development of end applications.

About Vision Components

Vision Components is a leading manufacturer of embedded vision systems with over 25 years of experience. The product range extends from versatile MIPI camera modules to freely programmable cameras with ARM/Linux and OEM systems for 2D and 3D image processing. The company was founded in 1996 by Michael Engel, inventor of the first industrial-grade intelligent camera. VC operates worldwide, with sales offices in the USA, Japan and Dubai as well as local partners in over 25 countries.

How NVIDIA and e-con Systems are Helping Solve Major Challenges In the Retail Industry https://www.edge-ai-vision.com/2023/10/how-nvidia-and-e-con-systems-are-helping-solve-major-challenges-in-the-retail-industry/ Thu, 05 Oct 2023 12:17:04 +0000

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems.

e-con Systems has proven expertise in integrating our cameras into the NVIDIA platform, including Jetson Xavier NX / Nano / TX2 NX, Jetson AGX Xavier, Jetson AGX Orin, and NVIDIA Jetson Orin NX / NANO. Find out how our cameras are integrated into the NVIDIA platform, their popular use cases, and how they empower you to solve retail challenges.

The retail industry faces numerous challenges, including security risks, inventory management, and enhancing the shopping experience. NVIDIA-powered cameras are helping to address these challenges by providing retailers with real-time data and insights. These cameras are also being used to enhance store security, optimize store layout and staffing, and more.

By leveraging the power of the NVIDIA platform, retailers can better understand their customers while improving operations and ultimately providing a more satisfying shopping experience.

In this blog, let’s discover more about the role of e-con Systems’ cameras integrated into the NVIDIA platform, how they help solve some major retail challenges, and their most popular use cases.

Read: e-con Systems launches 3D time of flight camera for NVIDIA Jetson AGX Orin and AGX Xavier

A quick introduction to NVIDIA and e-con Systems’ cameras

NVIDIA has long invested in camera-based sensing for applications focused on AI-powered edge computing and autonomous vehicles. One of its most notable releases is the Jetson Nano Developer Kit (released in 2019), built around a system-on-module designed for AI-powered edge computing applications like object recognition and autonomous shopping.

As you may already know, e-con Systems has proven expertise in integrating our cameras into the NVIDIA platform. We support the entire NVIDIA Jetson family, including Jetson Xavier NX / Nano / TX2 NX, Jetson AGX Xavier, Jetson AGX Orin, and NVIDIA Jetson Orin NX / NANO. e-con Systems’ popular camera solutions come with advanced features such as a dedicated ISP, ultra-low-light performance, low noise, a wide temperature range, LED flicker mitigation, bidirectional control, and long-distance transmission.

Benefits of using cameras powered by the NVIDIA platform

    • They work seamlessly with NVIDIA’s powerful GPUs, which are optimized for processing large amounts of data in real time. This allows for advanced image processing and analysis, enabling machines to “see” and understand their surroundings with greater accuracy and speed (see the capture-and-inference sketch after this list).
    • They capture high-quality data that can be used to train deep neural networks, which can then be applied to tasks such as object detection and recognition.
    • They are designed to be low-power and compact, making them ideal for embedded vision applications. This is particularly important for applications such as smart trolleys and smart checkout systems.
    • They are highly customizable, letting developers tailor them to specific applications and use cases. This flexibility makes it possible to create embedded vision solutions optimized for specific tasks and environments, providing better performance and reliability.
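
To make the first two points concrete, below is a minimal sketch of the capture-and-inference loop such cameras feed, using OpenCV’s generic DNN API. The camera index, the detector.onnx model file, and the 640×640 input size are placeholders, not e-con Systems or NVIDIA artifacts:

```python
import cv2

# Hypothetical ONNX detector; any cv2.dnn-loadable model would work here.
net = cv2.dnn.readNetFromONNX("detector.onnx")
# On a Jetson with CUDA-enabled OpenCV, inference can be pushed to the GPU.
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

cap = cv2.VideoCapture(0)  # placeholder device index for the camera module
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # The 640x640 input size is an assumption about the placeholder model.
    blob = cv2.dnn.blobFromImage(frame, scalefactor=1 / 255.0,
                                 size=(640, 640), swapRB=True)
    net.setInput(blob)
    detections = net.forward()
    # ...post-process `detections` here (draw boxes, count items, etc.)
cap.release()
```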

Read: Popular embedded vision use cases of NVIDIA® Jetson AGX Orin™

Major retail use cases of NVIDIA and e-con Systems

Smart Checkout

e-con Systems’ cameras, powered by the NVIDIA platform, are transforming smart checkout systems by enabling faster, more accurate, and more efficient checkout experiences. For one, they enable contactless checkout: customers can avoid touching checkout equipment and interacting with cashiers, reducing the risk of transmitting infectious diseases.

Smart checkout systems typically pair a camera with automated object detection at the billing or checkout counter. They can operate autonomously with limited supervision from human staff, offering benefits like more effective use of retail staff, an enhanced shopping experience, data insights on shopping patterns, and more. The integrated camera runs algorithms trained to detect a wide variety of objects in a retail store.
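
As a toy illustration of the tallying logic behind such a checkout counter, the sketch below turns a detector’s (label, confidence) outputs into a bill; the product labels, prices, and confidence threshold are all hypothetical:

```python
from collections import Counter

# Hypothetical price list; a real system would query the store's product database.
PRICES = {"apple": 0.50, "milk_1l": 1.20, "bread": 2.10}

def build_bill(detections, min_conf=0.6):
    """detections: (label, confidence) pairs emitted by the object detector."""
    counts = Counter(label for label, conf in detections if conf >= min_conf)
    lines = {item: (n, round(n * PRICES[item], 2))
             for item, n in counts.items() if item in PRICES}
    total = round(sum(cost for _, cost in lines.values()), 2)
    return {"lines": lines, "total": total}

print(build_bill([("apple", 0.91), ("apple", 0.88), ("milk_1l", 0.75), ("bag", 0.40)]))
# -> {'lines': {'apple': (2, 1.0), 'milk_1l': (1, 1.2)}, 'total': 2.2}
```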

Read: Key camera-related features of smart trolley and smart checkout systems

Smart Trolley

NVIDIA-powered cameras are changing the game for retailers by providing real-time insights into customer behavior and preferences through smart trolleys. These trolleys, equipped with cameras and sensors, identify products or the barcode on each item, enabling customers to pay directly from the cart. This can greatly reduce wait times and improve overall customer satisfaction.

Moreover, the data collected by these cameras can enable retailers to offer personalized product recommendations and promotions based on past purchases and interactions. This personalized approach can increase sales and customer loyalty.

Another significant advantage of NVIDIA-powered cameras in smart trolleys is enhanced store security. The cameras can detect and track suspicious activity in real time, such as items being removed from trolleys without payment or abandoned trolleys blocking store aisles.

Read: How embedded vision is contributing to the smart retail revolution

Other retail use cases include:

    • Optimized store operations and improved inventory management: With real-time data on store traffic and product placement, retailers can make informed decisions about store layout, staffing, and inventory management, leading to more efficient operations and reduced costs.
    • Personalized shopping experiences for customers: By analyzing customer behavior and preferences through detailed imaging, retailers can offer personalized product recommendations and promotions. In turn, this leads to increased sales and customer satisfaction.

As the technology continues to evolve, it is likely that we will see even more innovative applications of NVIDIA-powered cameras in the retail industry.

NVIDIA and e-con Systems: An ongoing multi-year Elite partnership

NVIDIA and e-con Systems have together formed a one-stop ecosystem, providing USB, MIPI, GMSL, GigE, and FPD-Link camera solutions across several industries and significantly reducing time-to-market. This multi-year Elite partnership started with the Jetson Nano and continues strong with the Jetson AGX Orin.

Explore our NVIDIA Jetson-based cameras

If you are looking for an expert to help integrate NVIDIA cameras into your embedded vision products, please write to camerasolutions@e-consystems.com. You can also check out our Camera Selector page to get a full view of e-con Systems’ camera portfolio.

Ranjith Kumar
Camera Solution Architect, e-con Systems

FRAMOS Launches Event-based Vision Sensing (EVS) Development Kit https://www.edge-ai-vision.com/2023/10/framos-launches-event-based-vision-sensing-evs-development-kit/ Wed, 04 Oct 2023 14:30:44 +0000

Munich, Germany / Ottawa, Canada, October 4, 2023 — FRAMOS launched the FSM-IMX636 Development Kit, an innovative platform that lets developers explore the capabilities of Event-based Vision Sensing (EVS) technology and test its potential benefits on NVIDIA® Jetson with the FRAMOS sensor module ecosystem.

Built around SONY and PROPHESEE’s cutting-edge EVS technology, this development kit simplifies the prototyping process and helps companies reduce time to market.

Event-based Vision Sensing (EVS)

Unlike conventional sensors that transmit all visible data in successive frames, an EVS sensor captures only changed pixel data, specifically luminance changes. Each event package includes the crucial information: pixel coordinates, a timestamp, and polarity, resulting in efficient bandwidth usage.
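
As a minimal sketch of how such an event stream might be represented and filtered in application code (the field names and microsecond timestamps are illustrative conventions, not the IMX636’s actual on-wire format):

```python
from dataclasses import dataclass

@dataclass
class Event:
    x: int         # pixel column
    y: int         # pixel row
    t_us: int      # timestamp in microseconds
    polarity: int  # +1 for a brightness increase, -1 for a decrease

def brightness_increases(events, since_us):
    """Filter positive-polarity events at or after a given timestamp."""
    return [e for e in events if e.polarity > 0 and e.t_us >= since_us]

stream = [Event(10, 42, 1000, +1), Event(11, 42, 1005, -1), Event(10, 43, 1010, +1)]
print(brightness_increases(stream, since_us=1001))  # -> only the event at (10, 43)
```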

By reducing the transmission of redundant data, this technology lowers energy consumption and optimizes processing capacities, reducing the cost of vision solutions.

EVS sensors provide high-speed, low-latency data output. They deliver outstanding results when monitoring vibration and movement in low-light conditions.

The FSM-IMX636 Development Kit consists of an IMX636 Event-based Vision Sensor board with a lens, all necessary adapters, accessories, and drivers, crafted into a comprehensive, easy-to-integrate solution for testing EVS in embedded applications on the NVIDIA® Jetson AGX Xavier™ and NVIDIA® Jetson AGX Orin™ platforms.

The PROPHESEE Metavision® Intelligence Suite provides machine learning-supported event data processing, analytics, and visualization modules.

FRAMOS’ new development kit is an affordable, simple-to-use, and intelligent platform for testing, prototyping, and faster launch of diverse EVS-based applications in a wide range of fields, including industrial automation, medical, automotive and mobility, and IoT and monitoring.

For more information, visit www.framos.com.

About FRAMOS

FRAMOS® is the leading global expert in vision systems, dedicated to innovation and excellence in enabling devices to see and think.

For more than 40 years, the company has supported clients worldwide in building pioneering vision systems.

Throughout all phases of vision system development, from hardware and software solutions to component selection, customization, consulting, prototyping, and mass production, companies worldwide rely on FRAMOS’ proven expertise.

Thanks to its engineering excellence and a large base of loyal clients, the company operates successfully on three continents.

Over 180 experts working in the Munich, Ottawa, Zagreb, and Čakovec offices are committed to developing cutting-edge imaging solutions for a wide variety of applications across industries.

For more information, please visit www.framos.com or follow us on LinkedIn, Facebook, Instagram or Twitter.


“Optimizing Image Quality and Stereo Depth at the Edge,” a Presentation from John Deere https://www.edge-ai-vision.com/2023/10/optimizing-image-quality-and-stereo-depth-at-the-edge-a-presentation-from-john-deere/ Wed, 04 Oct 2023 08:00:49 +0000

Travis Davis, Delivery Manager in the Automation and Autonomy Core, and Tarik Loukili, Technical Lead for Automation and Autonomy Applications, both of John Deere, present the “Optimizing Image Quality and Stereo Depth at the Edge” tutorial at the May 2023 Embedded Vision Summit. John Deere uses machine learning and computer vision (including stereo…

“Using a Collaborative Network of Distributed Cameras for Object Tracking,” a Presentation from Invision AI https://www.edge-ai-vision.com/2023/10/using-a-collaborative-network-of-distributed-cameras-for-object-tracking-a-presentation-from-invision-ai/ Tue, 03 Oct 2023 08:00:38 +0000

Samuel Örn, Team Lead and Senior Machine Learning and Computer Vision Engineer at Invision AI, presents the “Using a Collaborative Network of Distributed Cameras for Object Tracking” tutorial at the May 2023 Embedded Vision Summit. Using multiple fixed cameras to track objects requires a careful solution design. To enable scaling…

ProHawk Technology Group Overview of AI-enabled Computer Vision Restoration https://www.edge-ai-vision.com/2023/10/prohawk-technology-group-overview-of-ai-enabled-computer-vision-restoration/ Mon, 02 Oct 2023 17:55:45 +0000

Brent Willis, Chief Operating Officer of the ProHawk Technology Group, demonstrates the company’s latest edge AI and vision technologies and products at the September 2023 Edge AI and Vision Alliance Forum. Specifically, Willis discusses the company’s AI-enabled computer vision restoration technology.

ProHawk’s patented algorithms and technologies enable real-time, pixel-by-pixel video restoration, overcoming virtually all environmental obstacles to maximize end-user insight and productivity. ProHawk’s software completes its work in under 3 milliseconds, faster than the blink of an eye, enabling greater and earlier object detection and positioning the company to capture a meaningful share of the $135B computer vision market.
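
As simple frame-budget arithmetic (not a ProHawk benchmark), a sub-3 ms restoration step consumes under a tenth of the roughly 33 ms available per frame at 30 fps:

```python
FPS = 30
frame_budget_ms = 1000 / FPS  # ~33.3 ms between frames at 30 fps
restore_ms = 3.0              # claimed worst-case restoration latency

print(f"Share of frame budget used: {restore_ms / frame_budget_ms:.0%}")  # ~9%
print(f"Max sustainable frame rate: {1000 / restore_ms:.0f} fps")         # ~333 fps
```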

The ProHawk Technology Group has recently partnered with Dell, NVIDIA, and select other companies to bring technologies to market across a range of applications, including medical, military, law enforcement, safety and security, and other industries.

Emerging Image Sensor Technologies 2024-2034: Applications and Markets https://www.edge-ai-vision.com/2023/10/emerging-image-sensor-technologies-2024-2034-applications-and-markets/ Mon, 02 Oct 2023 16:22:20 +0000

For more information, visit https://www.idtechex.com/en/research-report/emerging-image-sensor-technologies-2024-2034-applications-and-markets/965.

Covering image sensors, SWIR, hybrid sensors, hyperspectral imaging, event-based vision, phase imaging, OPD, CCD, thin film photodetectors, organic and perovskite photodetectors, InGaAs, quantum image sensors, and miniaturized spectrometers

IDTechEx’s “Emerging Image Sensor Technologies 2024-2034: Applications and Markets” report evaluates a diverse range of image sensors with varying resolutions and wavelength sensitivities. These technologies are set to impact multiple industries, including healthcare, biometrics, autonomous driving, agriculture, chemical sensing, and food inspection, among several others. The growing importance of these technologies is expected to drive the growth of this market, with projections of it reaching US$739 million by 2034. This figure is in fact conservative, because a much larger market value is predicted if these sensors take off in the consumer electronics sector. As an example, the QD-on-CMOS market would see a 25X increase in revenue if the consumer electronics space is included.

Primary insights from interviews with individual players, ranging from established companies to innovative start-ups, are included alongside 25 detailed company profiles that discuss both technology and business models and include SWOT analyses. Additionally, the report includes technological and commercial readiness assessments, split by technology and application. It also discusses the commercial motivation for developing and adopting each of the emerging image sensing technologies and evaluates the barriers to entry.


Figure: Overview of emerging sensing technologies covered in this report.

Fundamental topics are covered throughout the report, including the evaluation of individual technology readiness levels as well as detailed SWOT analyses for each technology.

From these insights, it is possible to predict which technologies are most likely to succeed and which companies are best positioned to thrive in the market.

This report also covers the applications that will benefit from these technologies, as well as key challenges companies may face in commercializing their products. The rate at which autonomy develops, for instance, will depend in part on the maturity of these sensors in the medium to long term. Increased sensor maturity is synonymous with more cost-effective and advanced technology, i.e., more sensitive sensors.

Emerging Image Sensors Go Beyond Visible/IR

While conventional CMOS detectors for visible light are well established and somewhat commoditized, at least for low value applications, there is an extensive opportunity for more complex image sensors that offer capabilities beyond that of simply acquiring red, green, and blue (RGB) intensity values. As such, extensive effort is currently being devoted to developing emerging image sensor technologies that can detect aspects of light beyond human vision. This includes imaging over a broader spectral range, over a larger area, acquiring spectral data at each pixel, and simultaneously increasing temporal resolution and dynamic range.

Much of this opportunity stems from the ever-increasing adoption of machine vision, in which image analysis is performed by computational algorithms. Machine learning requires as much input data as possible to establish correlations that can facilitate object identification and classification, so acquiring optical information over a different wavelength range, or with spectral resolution for example, is highly advantageous.

Emerging image sensor technologies offer many other benefits. Depending on the technology, these can include similar capabilities at a lower cost, increased dynamic range, improved temporal resolution, spatially variable sensitivity, global shutters at high resolution, reduced influence of unwanted scattering, flexibility/conformality, and more. A particularly important trend is the development of much cheaper alternatives to very expensive InGaAs sensors for imaging in the short-wave infrared (SWIR, 1000-2000 nm) spectral region, which will open this capability to a much wider range of applications. These include autonomous vehicles, in which SWIR imaging helps distinguish objects and materials that appear similar in the visible spectrum, while also reducing scattering from dust and fog.

There are several competing emerging SWIR technologies. These include hybrid image sensors, in which an additional light-absorbing thin-film layer made of organic semiconductors or quantum dots is placed on top of a CMOS read-out circuit to extend the wavelength detection range into the SWIR region. Another technology is extended-range silicon, in which the properties of silicon are modified to extend the absorption range beyond its bandgap limitations. In a market currently dominated by expensive InGaAs sensors, these new approaches promise a substantial price reduction, which is expected to encourage the adoption of SWIR imaging for new applications such as autonomous vehicles.

Obtaining as much information as possible from incident light is highly advantageous for applications that require object identification, since classification algorithms have more data to work with. Hyperspectral imaging, in which a complete spectrum is acquired at each pixel to produce an (x, y, λ) data cube using a dispersive optical element and an image sensor, is a relatively established technology that has gained traction in precision agriculture and industrial process inspection. However, at present most hyperspectral cameras work on a line-scan principle, and SWIR hyperspectral imaging is restricted to relatively niche applications due to the high cost of InGaAs sensors, which can exceed US$50,000. Emerging technologies using silicon or thin-film materials look set to disrupt both aspects, with snapshot imaging offering an alternative to line-scan cameras and with the new SWIR sensing technologies facilitating cost reduction and adoption across a wider range of applications.
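
As a minimal numpy illustration of the (x, y, λ) data cube described above, the sketch below shows that a per-pixel spectrum and a single-band image are simply slices through the cube; the dimensions and wavelength range are illustrative:

```python
import numpy as np

# Illustrative cube: 120x160 spatial pixels, 100 spectral bands over 900-1700 nm (SWIR).
H, W, BANDS = 120, 160, 100
cube = np.random.rand(H, W, BANDS).astype(np.float32)  # stand-in for real sensor data
wavelengths_nm = np.linspace(900, 1700, BANDS)

# The spectrum at one spatial pixel is a 1-D slice along the λ axis...
spectrum = cube[60, 80, :]

# ...and a single-wavelength image is a 2-D slice; here, the band nearest 1450 nm,
# a water absorption feature often used to separate materials in the SWIR.
band = int(np.argmin(np.abs(wavelengths_nm - 1450)))
image_1450 = cube[:, :, band]
print(spectrum.shape, image_1450.shape)  # (100,) (120, 160)
```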

Another emerging image sensing technology is event-based vision, also known as dynamic vision sensing (DVS). Autonomous vehicles, drones, and high-speed industrial applications require image sensing with high temporal resolution. With conventional frame-based imaging, however, high temporal resolution produces vast amounts of data that require computationally intensive processing. Event-based vision resolves this challenge. It is a completely different way of obtaining optical information, in which each sensor pixel reports timestamps that correspond to intensity changes. As such, event-based vision combines greater temporal resolution of rapidly changing image regions with much reduced data transfer and subsequent processing requirements.
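
To make the contrast with frame-based imaging concrete, here is a toy numpy sketch that derives DVS-style events from two consecutive frames. A real event pixel makes this comparison continuously in analog circuitry rather than frame to frame, and the log-intensity threshold here is illustrative:

```python
import numpy as np

def frames_to_events(prev, curr, t_us, threshold=0.2):
    """Emit (x, y, t_us, polarity) tuples wherever log-intensity changed enough.
    prev/curr: 2-D float arrays in (0, 1]."""
    diff = np.log(curr + 1e-6) - np.log(prev + 1e-6)
    ys, xs = np.nonzero(np.abs(diff) > threshold)
    return [(int(x), int(y), t_us, 1 if diff[y, x] > 0 else -1)
            for y, x in zip(ys, xs)]

prev = np.full((4, 4), 0.5)
curr = prev.copy()
curr[1, 2] = 0.9  # one pixel brightens; a frame camera would resend all 16 pixels
print(frames_to_events(prev, curr, t_us=1000))  # -> [(2, 1, 1000, 1)]
```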

The report also examines the burgeoning market for miniaturized spectrometers. Driven by growth in smart electronics and Internet of Things devices, low-cost miniaturized spectrometers are becoming increasingly relevant across different sectors. The complexity and functionality of standard visible light sensors can be significantly improved through the integration of miniaturized spectrometers that detect from the visible to the SWIR region of the spectrum. The future imagined by researchers at Fraunhofer is a spectrometer weighing just 1 gram and costing a single dollar. Miniaturized spectrometers are expected to deliver inexpensive solutions that improve autonomous systems, particularly within industrial imaging and inspection as well as consumer electronics.

IDTechEx has 20 years of expertise covering emerging technologies, including image sensors, thin-film materials, and semiconductors. Our analysts have closely followed the latest developments in the relevant markets, interviewed key players within the industry, attended conferences, and delivered consulting projects in the field. This report examines the current status and latest trends in technology performance, supply chain, manufacturing know-how, and application development progress. It also identifies the key challenges, competition, and innovation opportunities within the image sensor market.

Dr. Miguel El Guendy
Technology Analyst, IDTechEx

Dr. Xiaoxi He
Research Director, Topic Lead, IDTechEx
