Edge AI and Vision Alliance
Designing machines that perceive and understand.

“MIPI CSI-2 Image Sensor Interface Standard Features Enable Efficient Embedded Vision Systems,” a Presentation from the MIPI Alliance
https://www.edge-ai-vision.com/2023/10/mipi-csi-2-image-sensor-interface-standard-features-enable-efficient-embedded-vision-systems-a-presentation-from-the-mipi-alliance/
Mon, 09 Oct 2023

The post “MIPI CSI-2 Image Sensor Interface Standard Features Enable Efficient Embedded Vision Systems,” a Presentation from the MIPI Alliance appeared first on Edge AI and Vision Alliance.

Haran Thanigasalam, Camera and Imaging Consultant to the MIPI Alliance, presents the “MIPI CSI-2 Image Sensor Interface Standard Features Enable Efficient Embedded Vision Systems” tutorial at the May 2023 Embedded Vision Summit. As computer vision applications continue to evolve rapidly, there’s a growing need for a smarter standardized interface connecting…

“Introduction to the CSI-2 Image Sensor Interface Standard,” a Presentation from the MIPI Alliance
https://www.edge-ai-vision.com/2023/10/introduction-to-the-csi-2-image-sensor-interface-standard-a-presentation-from-the-mipi-alliance/
Fri, 06 Oct 2023

Haran Thanigasalam, Camera and Imaging Consultant to the MIPI Alliance, presents the “Introduction to the MIPI CSI-2 Image Sensor Interface Standard” tutorial at the May 2023 Embedded Vision Summit. By taking advantage of select features in standardized interfaces, vision system architects can help reduce processor load, cost and power consumption…

“Practical Approaches to DNN Quantization,” a Presentation from Magic Leap
https://www.edge-ai-vision.com/2023/10/practical-approaches-to-dnn-quantization-a-presentation-from-magic-leap/
Thu, 05 Oct 2023

Dwith Chenna, Senior Embedded DSP Engineer for Computer Vision at Magic Leap, presents the “Practical Approaches to DNN Quantization” tutorial at the May 2023 Embedded Vision Summit. Convolutional neural networks, widely used in computer vision tasks, require substantial computation and memory resources, making it challenging to run these models on…
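As background for readers new to the topic, the core of post-training quantization can be sketched in a few lines. The example below is ours, not drawn from the presentation: symmetric per-tensor int8 quantization of a weight array, with the reconstruction error bounded by half the quantization step.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor post-training quantization to int8."""
    scale = float(np.abs(weights).max()) / 127.0  # largest magnitude maps to +/-127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 32)).astype(np.float32)  # stand-in for a layer's weights
q, scale = quantize_int8(w)
max_error = float(np.abs(w - dequantize(q, scale)).max())  # bounded by scale / 2
```

This stores 4x fewer bytes per weight than float32; real deployments additionally quantize activations and often use per-channel scales for accuracy.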

“Optimizing Image Quality and Stereo Depth at the Edge,” a Presentation from John Deere
https://www.edge-ai-vision.com/2023/10/optimizing-image-quality-and-stereo-depth-at-the-edge-a-presentation-from-john-deere/
Wed, 04 Oct 2023

Travis Davis, Delivery Manager in the Automation and Autonomy Core, and Tarik Loukili, Technical Lead for Automation and Autonomy Applications, both of John Deere, present the “Optimizing Image Quality and Stereo Depth at the Edge” tutorial at the May 2023 Embedded Vision Summit. John Deere uses machine learning and computer vision (including stereo…

“Using a Collaborative Network of Distributed Cameras for Object Tracking,” a Presentation from Invision AI
https://www.edge-ai-vision.com/2023/10/using-a-collaborative-network-of-distributed-cameras-for-object-tracking-a-presentation-from-invision-ai/
Tue, 03 Oct 2023

Samuel Örn, Team Lead and Senior Machine Learning and Computer Vision Engineer at Invision AI, presents the “Using a Collaborative Network of Distributed Cameras for Object Tracking” tutorial at the May 2023 Embedded Vision Summit. Using multiple fixed cameras to track objects requires a careful solution design. To enable scaling…
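To see why fixed, calibrated cameras simplify such a design, consider this toy sketch (ours, not Invision AI's): each camera projects its detections through a pre-calibrated homography into shared ground-plane coordinates, where cross-camera association reduces to gated nearest-neighbor matching. The homographies and coordinates below are invented for illustration.

```python
import numpy as np

def to_ground_plane(H, pixel):
    """Project an image point into shared ground-plane coordinates
    via a pre-calibrated homography H for that camera."""
    x, y = pixel
    p = H @ np.array([x, y, 1.0])
    return p[:2] / p[2]  # dehomogenize

def associate(points_a, points_b, gate=1.0):
    """Greedily match ground-plane points from two cameras that fall
    within `gate` units of each other."""
    pairs, used = [], set()
    for i, a in enumerate(points_a):
        dists = [float(np.linalg.norm(a - b)) for b in points_b]
        j = int(np.argmin(dists))
        if dists[j] < gate and j not in used:
            pairs.append((i, j))
            used.add(j)
    return pairs

H_cam1 = np.eye(3)                 # toy calibration: pixels map 1:1 to ground units
H_cam2 = np.diag([2.0, 2.0, 1.0])  # second camera at half the pixel scale
a = [to_ground_plane(H_cam1, (10.0, 20.0))]
b = [to_ground_plane(H_cam2, (5.0, 10.0))]  # same physical object, different camera
matches = associate(a, b)
```

Production systems replace the greedy matcher with Hungarian assignment and add temporal filtering, but the ground-plane representation is what lets cameras be added without pairwise image-space reasoning.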

“A Survey of Model Compression Methods,” a Presentation from Instrumental
https://www.edge-ai-vision.com/2023/10/a-survey-of-model-compression-methods-a-presentation-from-instrumental/
Mon, 02 Oct 2023

Rustem Feyzkhanov, Staff Machine Learning Engineer at Instrumental, presents the “Survey of Model Compression Methods” tutorial at the May 2023 Embedded Vision Summit. One of the main challenges when deploying computer vision models to the edge is optimizing the model for speed, memory and energy consumption. In this talk, Feyzkhanov…

“Reinventing Smart Cities with Computer Vision,” a Presentation from Hayden AI
https://www.edge-ai-vision.com/2023/09/reinventing-smart-cities-with-computer-vision-a-presentation-from-hayden-ai/
Fri, 29 Sep 2023

Vaibhav Ghadiok, Co-founder and CTO of Hayden AI, presents the “Reinventing Smart Cities with Computer Vision” tutorial at the May 2023 Embedded Vision Summit. Hayden AI has developed the first AI-powered data platform for smart and safe city applications such as traffic enforcement, parking and asset management. In this talk,…

“Learning for 360° Vision,” a Presentation from Google
https://www.edge-ai-vision.com/2023/09/learning-for-360-vision-a-presentation-from-google/
Thu, 28 Sep 2023

Yu-Chuan Su, Research Scientist at Google, presents the “Learning for 360° Vision” tutorial at the May 2023 Embedded Vision Summit. As a core building block of virtual reality (VR) and augmented reality (AR) technology, and with the rapid growth of VR and AR, 360° cameras are becoming more available and…

Edge AI and Vision Insights: September 27, 2023 Edition
https://www.edge-ai-vision.com/2023/09/edge-ai-and-vision-insights-september-27-2023-edition/
Wed, 27 Sep 2023


MULTIMODAL PERCEPTION

Frontiers in Perceptual AI: First-person Video and Multimodal Perception (Kristen Grauman)
First-person or “egocentric” perception requires understanding the video and multimodal data that streams from wearable cameras and other sensors. The egocentric view offers a special window into the camera wearer’s attention, goals, and interactions with people and objects in the environment, making it an exciting avenue for both augmented reality and robot learning. The multimodal nature is particularly compelling, with opportunities to bring together audio, language, and vision.

Kristen Grauman, Professor at the University of Texas at Austin and Research Director at Facebook AI Research, begins her 2023 Embedded Vision Summit keynote presentation by introducing Ego4D, a massive new open-sourced multimodal egocentric dataset that captures the daily-life activity of people around the world. The result of a multi-year, multi-institution effort, Ego4D pushes the frontiers of first-person multimodal perception with a suite of research challenges ranging from activity anticipation to audio-visual conversation.

Building on this resource, Grauman presents her group’s ideas for searching egocentric videos with natural language queries (“Where did I last see X? Did I leave the garage door open?”), injecting semantics from text and speech into powerful video representations, and learning audio-visual models to understand a camera wearer’s physical environment or augment their hearing in busy places. She also touches on interesting performance-oriented challenges raised by having very long video sequences (hours!) and ideas for learning to scale retrieval and encoders.

Making Sense of Sensors: Combining Visual, Laser and Wireless Sensors to Power Occupancy Insights for Smart Workplaces (Camio)
Just as humans rely on multiple senses to understand our environment, electronic systems are increasingly equipped with multiple types of perceptual sensors. Combining and integrating data from heterogeneous sensors (e.g., image, thermal, motion, LiDAR, RF) along with other types of data (e.g., card key swipes) can provide valuable insights. But combining data from heterogeneous sensors is challenging. In this presentation, Rakshit Agrawal, Vice President of Research and Development at Camio, shares his company’s experiences developing and applying practical techniques that combine heterogeneous data to enable real-world solutions within buildings. For example, occupancy insights in work spaces help to optimize use of space, improve staff engagement and productivity, enhance energy efficiency and inform maintenance scheduling decisions.
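The fusion step itself can be illustrated with a toy model. The sketch below uses a naive-Bayes log-odds update over independent detections; the sensor names, detection probabilities, and prior are invented for illustration and do not describe Camio's techniques.

```python
import math

def fuse_occupancy(prior, readings):
    """Fuse independent sensor detections into an occupancy probability
    via a log-odds (naive Bayes) update. Each reading is a tuple:
    (p_detect_if_occupied, p_detect_if_empty, detected)."""
    log_odds = math.log(prior / (1.0 - prior))
    for p_hit, p_false, detected in readings:
        if detected:
            log_odds += math.log(p_hit / p_false)
        else:
            log_odds += math.log((1.0 - p_hit) / (1.0 - p_false))
    return 1.0 / (1.0 + math.exp(-log_odds))

# Camera reports motion, RF sensor agrees, badge reader logged no swipe:
p_occupied = fuse_occupancy(0.3, [
    (0.90, 0.05, True),   # camera: sensitive, rarely false-fires
    (0.80, 0.10, True),   # RF presence sensor
    (0.60, 0.01, False),  # no badge swipe observed
])
```

Two agreeing detections outweigh a missing badge swipe, so the fused probability lands well above the 0.3 prior; real deployments must also model sensor correlations, which this independence assumption ignores.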

EXPERT INSIGHTS

Accelerating the Era of AI Everywhere (Panel Discussion)
Join the panel on a journey towards the era of AI everywhere—where perceptual AI at the edge is as commonplace as LCD displays and wireless connectivity. These distinguished industry experts—Jeff Bier, panel moderator and Founder of the Edge AI and Vision Alliance, Dean Kamen, Founder of DEKA Research and Development, Lokwon Kim, CEO of DEEPX, Jason Lavene, Director of Advanced Development Engineering at Keurig Dr Pepper, and Pete Warden, Chief Executive Officer of Useful Sensors—share their insights on what it will take to unlock the full potential of this groundbreaking technology, empowering it to enhance ease of use, safety, autonomy and numerous other capabilities across a wide range of applications. The panelists delve into the challenges that early adopters of perceptual AI have faced and why some product developers may still perceive it as too complicated, expensive or unreliable—and what can be done to address these issues. Above all, they chart a path forward for the industry, aiming to “cross the chasm” and make perceptual AI an accessible and indispensable feature of everyday products.

Generative AI: How Will It Impact Edge Applications and Machine Perception? (Panel Discussion)
Seemingly overnight, ChatGPT has spurred massive interest in—and excitement around—generative AI, and has become the fastest growing application in history. How will generative AI transform how we think about AI, and how we use it? What types of commercial applications are best suited for solutions powered by today’s generative AI technology? Will recent advances in generative AI change how we create and use discriminative AI models, like those used for machine perception? Will generative AI obviate the need for massive reservoirs of hand-labeled training data? Will it accelerate our ability to create systems that effortlessly meld multiple types of data, such as text, images and sound? With state-of-the-art generative models exceeding 100 billion parameters, will generative models ever be suitable for deployment at the edge? If so, for what use cases? This lively and insightful panel discussion explores these and many other questions around the rapidly evolving role of generative AI in edge and machine-perception applications. Panelists include Sally Ward-Foxton, panel moderator and Senior Reporter at EE Times, Greg Kostello, CTO and Co-Founder of Huma.AI, Vivek Pradeep, Partner Research Manager at Microsoft, Steve Teig, CEO of Perceive, and Roland Memisevic, Senior Director at Qualcomm AI Research.

UPCOMING INDUSTRY EVENTS

Embedded Vision Summit: May 21-23, 2024, Santa Clara, California

More Events

FEATURED NEWS

Cadence Accelerates On-device and Edge AI Performance and Efficiency with New Neo NPU IP and NeuroWeave SDK

AiM Future Expands Partnerships, Introduces Next-generation NeuroMosAIc Processors

Hailo and Inuitive Adopt VeriSilicon’s IP for Vision AI Processors

Upcoming Event from Avnet and Renesas Explores AI and Connectivity Solutions

Allegro DVT Launches New Generation of High-performance Multi-format Video Encoder IP for 4K/8K Video Resolutions

More News

EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE

Qualcomm Cognitive ISP (Best Camera or Sensor)
Qualcomm’s Cognitive ISP is the 2023 Edge AI and Vision Product of the Year Award winner in the Cameras and Sensors category. The Cognitive ISP (within the Snapdragon 8 Gen 2 Mobile Platform) is the only smartphone ISP that can apply the AI photo-editing technique called “Semantic Segmentation” in real time. Semantic Segmentation works like “Photoshop layers,” but is handled entirely within the ISP, turning great photos into spectacular ones. Because it runs in real time, it operates while you are capturing photos and videos, or even before: you can see objects in the viewfinder being enhanced as you get ready to shoot. A real-time segmentation filter is groundbreaking because it means the camera is truly contextually aware of what it is seeing. Qualcomm achieved this by building a physical bridge between the ISP and the DSP, called Hexagon Direct Link, which lets the two operate simultaneously: the ISP captures images while the DSP runs Semantic Segmentation neural networks to assign context to every image in real time.
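Conceptually, segmentation-guided enhancement amounts to applying different processing to different pixel classes. The NumPy sketch below is ours; the function name, toy label map, and gain values are illustrative and unrelated to Qualcomm's implementation.

```python
import numpy as np

def enhance_by_segment(image, labels, gains):
    """Apply a per-class gain to each pixel using a semantic label map
    (one class id per pixel), mimicking 'layers' edited independently."""
    out = image.astype(np.float32)
    for cls, gain in gains.items():
        out[labels == cls] *= gain  # boolean mask broadcasts over channels
    return np.clip(np.round(out), 0, 255).astype(np.uint8)

frame = np.full((4, 4, 3), 100, dtype=np.uint8)       # toy uniform gray frame
labels = np.zeros((4, 4), dtype=np.uint8)
labels[0, :] = 1                                      # top row labeled "sky"
result = enhance_by_segment(frame, labels, {1: 1.5})  # brighten sky only
```

A real ISP applies far richer per-class tuning (tone, noise, color) in hardware per frame, but the principle is the same: the label map decides which processing each pixel receives.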

Please see here for more information on Qualcomm’s Cognitive ISP. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.

 

“90% of Tech Start-Ups Fail. What the Other 10% Know,” a Presentation from Connected Vision Advisors
https://www.edge-ai-vision.com/2023/09/90-of-tech-start-ups-fail-what-the-other-10-know-a-presentation-from-connected-vision-advisors/
Wed, 27 Sep 2023

Simon Morris, Executive Advisor at Connected Vision Advisors, presents the “90% of Tech Start-Ups Fail. What Do the Other 10% Know?” tutorial at the May 2023 Embedded Vision Summit. Morris is fortunate to have led three tech start-ups with three successful exits. He received a lot of advice along the…
