Perceive - Edge AI and Vision Alliance
https://www.edge-ai-vision.com/category/provider/perceive/
Designing machines that perceive and understand.

“Generative AI: How Will It Impact Edge Applications and Machine Perception?,” An Embedded Vision Summit Expert Panel Discussion
https://www.edge-ai-vision.com/2023/09/generative-ai-how-will-it-impact-edge-applications-and-machine-perception-an-embedded-vision-summit-expert-panel-discussion/
Thu, 14 Sep 2023

Sally Ward-Foxton, Senior Reporter at EE Times, moderates the “Generative AI: How Will It Impact Edge Applications and Machine Perception?” Expert Panel at the May 2023 Embedded Vision Summit. Other panelists include Greg Kostello, CTO and Co-Founder of Huma.AI, Vivek Pradeep, Partner Research Manager at Microsoft, Steve Teig, CEO of Perceive, and Roland Memisevic, Senior Director at Qualcomm AI Research.

Seemingly overnight, ChatGPT has spurred massive interest in—and excitement around—generative AI, and has become the fastest-growing application in history. How will generative AI transform how we think about AI, and how we use it? What types of commercial applications are best suited for solutions powered by today’s generative AI technology?

Will recent advances in generative AI change how we create and use discriminative AI models, like those used for machine perception? Will generative AI obviate the need for massive reservoirs of hand-labeled training data? Will it accelerate our ability to create systems that effortlessly meld multiple types of data, such as text, images and sound?

With state-of-the-art generative models exceeding 100 billion parameters, will generative models ever be suitable for deployment at the edge? If so, for what use cases? This lively and insightful panel discussion explores these and many other questions around the rapidly-evolving role of generative AI in edge and machine-perception applications.

“Making GANs Much Better, or If at First You Don’t Succeed, Try, Try a GAN,” a Presentation from Perceive
https://www.edge-ai-vision.com/2023/07/making-gans-much-better-or-if-at-first-you-dont-succeed-try-try-a-gan-a-presentation-from-perceive/
Thu, 20 Jul 2023

Steve Teig, CEO of Perceive, presents the “Making GANs Much Better, or If at First You Don’t Succeed, Try, Try a GAN” tutorial at the May 2023 Embedded Vision Summit. Generative adversarial networks, or GANs, are widely used to create amazing “fake” images and realistic, synthetic training data. And yet,…

Perceive Demonstration of Image Classification and Pose Detection using Ergo AI Processors
https://www.edge-ai-vision.com/2023/03/perceive-demonstration-of-image-classification-and-pose-detection-using-ergo-ai-processors/
Thu, 30 Mar 2023

David McIntyre, Vice President of Marketing for Perceive, demonstrates the company’s latest edge AI and vision technologies and products at the March 2023 Edge AI and Vision Innovation Forum. Specifically, McIntyre demonstrates the unique combination of capability and power-efficiency delivered by the Perceive Ergo and Ergo 2 AI processors.

McIntyre shows the Ergo chip running ResNet-50 for image classification at 30 fps while drawing approximately 10 mW of compute power, and the Ergo 2 processor running pose detection with two neural networks executing concurrently while drawing approximately 100 mW. The Ergo 2 processor is Perceive’s newest product, designed to handle large neural networks, including transformer networks, within tight power constraints, making it possible to integrate increasingly sophisticated AI-based features into a variety of edge devices for consumer and enterprise applications.

Perceive Launches Ergo 2 AI Processor with Unmatched Combination of Performance and Power Efficiency
https://www.edge-ai-vision.com/2023/01/perceive-launches-ergo-2-ai-processor-with-unmatched-combination-of-performance-and-power-efficiency/
Tue, 10 Jan 2023

Ergo 2 enables even larger, more complex neural networks inside edge devices, including transformer networks for language and imaging, while drawing less than 100 milliwatts of compute power

January 3, 2023 – Today, Perceive® unveiled a faster, more powerful addition to the Ergo® product line. The new Ergo 2 AI processor provides the performance needed for more complex use cases, including those requiring transformer models, larger neural networks, multiple networks operating simultaneously, and multimodal inputs, while maintaining industry-leading power efficiency.

Steve Teig, Perceive founder and CEO, explains the market opportunity for the newest Ergo chip.

“With the new Ergo 2 processor, we have expanded our ability to provide device manufacturers with a path to building their most ambitious products,” said Teig. “These can include transformer models for language or visual processing, video processing at higher frame rates, and even combining multiple, large neural networks in a single application.”

Faster, smarter, and still tiny

Ergo 2 runs up to four times faster than Perceive’s first-generation Ergo chip and delivers significantly more processing power than typical chips designed for tiny ML. Now, product developers can leverage advanced neural networks such as YOLOv5, RoBERTa, GANs, and U-Nets to deliver accurate results quickly. All Ergo 2 processing is done on-chip and without external memory for better power efficiency, privacy, and security. Ergo 2 silicon achieves:

  • 1,106 inferences per second running MobileNet V2
  • 979 inferences per second running ResNet-50
  • 115 inferences per second running YOLOv5-S
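Taken together with the sub-100 mW power budget quoted elsewhere in the announcement, these throughput figures imply per-inference energy on the order of tens to hundreds of microjoules. A back-of-the-envelope sketch (the 0.1 W operating point is assumed as an upper bound from the announcement, not a measured figure):

```python
# Back-of-the-envelope energy per inference: energy = power / throughput.
# POWER_W is the "less than 100 milliwatts" budget from the announcement,
# used here as an assumed upper bound, not a measured operating point.

def energy_per_inference_uj(power_w: float, inferences_per_s: float) -> float:
    """Upper-bound energy per inference, in microjoules."""
    return power_w / inferences_per_s * 1e6

POWER_W = 0.1  # assumed worst case
for net, ips in [("MobileNet V2", 1106), ("ResNet-50", 979), ("YOLOv5-S", 115)]:
    print(f"{net}: <= {energy_per_inference_uj(POWER_W, ips):.0f} uJ per inference")
```

Even under the worst-case power assumption, a ResNet-50 inference costs roughly 100 µJ on these numbers, which is the scale at which always-on battery-powered operation becomes plausible.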

To provide the performance enhancements needed to run these larger networks, the Ergo 2 chip has been designed with a pipelined architecture and unified memory, which improve its flexibility and overall operating efficiency. As a result, Ergo 2 can support higher-resolution sensors and a wider range of applications, including:

  • Language processing applications such as speech-to-text and sentence completion
  • Audio applications such as acoustic echo cancellation and richer audio event detection
  • Demanding video processing tasks such as video super resolution and pose detection

With a 7 mm by 7 mm footprint, the Ergo 2 processor is manufactured by GlobalFoundries using the 22FDX™ platform and is designed to operate without requiring external DRAM. Its low power draw also means it doesn’t need cooling.

The chip can run multiple heterogeneous networks simultaneously, enabling intelligent video and audio features for devices such as enterprise-grade cameras for security, access control, thermal imaging, or retail video analytics; for industrial use cases including visual inspection; or for integration into consumer products such as laptops, tablets, and advanced wearables.

Ready for integration with development platform and model zoo

To support building solutions for each product developer’s needs, Perceive provides a comprehensive suite of tools, including an ML toolchain that massively compresses and optimizes neural networks to run on Ergo 2, a software development kit for building embedded applications using Ergo 2, and a model zoo with example networks and applications to accelerate the development process.

About Perceive Corporation

Perceive makes devices smarter. The company develops breakthrough neural network inference solutions that push the performance-accuracy-power envelope, while protecting the security and privacy of consumers. By bringing datacenter-class accuracy and performance to the edge, Perceive enables device makers to deliver smarter products that understand their environment and respond intelligently. Founded in 2018, Perceive is a majority-owned subsidiary of Xperi Inc. and is based in San Jose, California. For more information, visit https://www.perceive.io.

Free Webinar Explores Compression Techniques For Running Deep Learning Networks on Constrained Hardware
https://www.edge-ai-vision.com/2022/08/free-webinar-explores-compression-techniques-for-running-deep-learning-networks-on-constrained-hardware/
Thu, 18 Aug 2022

On November 10, 2022 at 9 am PT (noon ET), Steve Teig, Founder and CEO of Perceive, will present the free one-hour webinar “Putting Activations on a Diet – Or Why Watching Your Weights Is Not Enough,” organized by the Edge AI and Vision Alliance. Here’s the description, from the event registration page:

To reduce the memory requirements of neural networks, researchers have proposed numerous heuristics for compressing weights. Lower precision, sparsity, weight sharing and various other schemes shrink the memory needed by the neural network’s weights or program. Unfortunately, during network execution, memory use is usually dominated by activations (the data flowing through the network) rather than weights.

Although lower precision can reduce activation memory somewhat, more extreme steps are required in order to enable large networks to run efficiently with small memory footprints. Fortunately, the underlying information content of activations is often modest, so novel compression strategies can dramatically widen the range of networks executable on constrained hardware. In this webinar, Teig will introduce some new strategies for compressing activations, sharply reducing their memory footprint. A question-and-answer session will follow the presentation.
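The imbalance the webinar describes is easy to see with a small, hypothetical example (the layer shapes below are chosen for illustration and are not taken from the webinar): for an early convolutional layer, the activation tensor can dwarf the weight tensor, so compressing weights alone leaves most of the memory untouched:

```python
# Hypothetical early conv layer: 64 filters of shape 3x3x64 producing a
# 224x224x64 output. Element counts only; bytes assume fp32 (4 B) and int8 (1 B).
weights = 3 * 3 * 64 * 64        # 36,864 parameters
activations = 224 * 224 * 64     # 3,211,264 output values

print(f"weights:     {weights * 4 / 1e6:.2f} MB in fp32")
print(f"activations: {activations * 4 / 1e6:.2f} MB in fp32")
print(f"activations: {activations / 1e6:.2f} MB in int8, "
      f"still ~{activations // weights}x the weight count")
```

Here the activations outnumber the weights by nearly two orders of magnitude, so even quantizing them to 8 bits leaves an activation footprint far larger than the entire weight tensor, which is the motivation for the more aggressive activation-compression strategies the talk introduces.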

To register for this free webinar, please see the event page. For more information, please email webinars@edge-ai-vision.com.

“Are Neuromorphic Vision Technologies Ready for Commercial Use?,” An Embedded Vision Summit Expert Panel Discussion
https://www.edge-ai-vision.com/2022/08/are-neuromorphic-vision-technologies-ready-for-commercial-use-an-embedded-vision-summit-expert-panel-discussion/
Wed, 10 Aug 2022

Sally Ward-Foxton, European Correspondent for EE Times, moderates the “Are Neuromorphic Vision Technologies Ready for Commercial Use?” Expert Panel at the May 2022 Embedded Vision Summit. Other panelists include Garrick Orchard, Research Scientist at Intel Labs, James Marshall, Chief Scientific Officer at Opteran, Ryad Benosman, Professor at the University of Pittsburgh and Adjunct Professor at the CMU Robotics Institute, and Steve Teig, Founder and CEO of Perceive.

Neuromorphic vision (vision systems inspired by biological systems) promises to save power and improve latency in a variety of edge and endpoint applications. After many years of research and development, are these technologies ready to move out of the lab and into today’s electronic systems and products? What challenges do neuromorphic vision sensors and neuromorphic computing chips face when entering a market saturated by classical and deep learning-driven computer vision systems, and how can these challenges be overcome?

Are both neuromorphic sensors and neuromorphic processors required for success, and what is the right hardware for today’s systems? Are spiking neural networks required and are they ready for commercial deployment? What sort of industry ecosystem will be required to enable these technologies to become widely used? This lively panel discussion provides perspectives on these and other topics from a panel of seasoned experts who are working at the leading edge of neuromorphic vision development, tools and techniques.

“Facing Up to Bias,” a Presentation from Perceive
https://www.edge-ai-vision.com/2021/08/facing-up-to-bias-a-presentation-from-perceive/
Mon, 30 Aug 2021

Steve Teig, CEO of Perceive, presents the “Facing Up to Bias” tutorial at the May 2021 Embedded Vision Summit.

Today’s face recognition networks identify white men correctly more often than white women or non-white people. The use of these models can manifest racism, sexism, and other troubling forms of discrimination. There are also publications suggesting that compressed models have greater bias than uncompressed ones.

Remarkably, poor statistical reasoning bears as much responsibility for the underlying biases as social pathology does. Further, compression per se is not the source of bias; it just magnifies bias already produced by mainstream training methodologies. By illuminating the sources of statistical bias, we can train models in a more principled way – not just throwing more data at them – to be more discriminating, rather than discriminatory.
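One concrete consequence of the statistical point: an aggregate accuracy number can hide exactly the disparity the talk describes. A minimal synthetic sketch (the group sizes and per-group accuracies below are invented for illustration, not taken from the presentation):

```python
# Synthetic evaluation set: 900 samples from group A, 100 from group B.
# The model is right 95% of the time on A but only 60% of the time on B.
correct = {"A": 855, "B": 60}
total = {"A": 900, "B": 100}

overall = sum(correct.values()) / sum(total.values())
print(f"overall accuracy: {overall:.1%}")  # 91.5% -- looks healthy
for group in total:
    print(f"group {group}: {correct[group] / total[group]:.1%}")  # exposes the gap
```

Because the majority group dominates the average, the headline 91.5% says almost nothing about performance on the minority group, which is why disaggregated (per-group) evaluation is a prerequisite for the more principled training the talk advocates.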

See here for a PDF of the slides.

“TinyML Isn’t Thinking Big Enough,” a Presentation from Perceive
https://www.edge-ai-vision.com/2021/08/tinyml-isnt-thinking-big-enough-a-presentation-from-perceive/
Tue, 10 Aug 2021

Steve Teig, CEO of Perceive, presents the “TinyML Isn’t Thinking Big Enough” tutorial at the May 2021 Embedded Vision Summit.

Today, TinyML focuses primarily on shoehorning neural networks onto microcontrollers or small CPUs but misses the opportunity to transform all of ML because of two unfortunate assumptions: first, that tiny models must make significant performance and accuracy compromises to fit inside edge devices, and second, that tiny models should run on CPUs or microcontrollers.

Regarding the first assumption, information-theoretic considerations would suggest that principled compression (vs., say, just replacing 32-bit weights with 8-bit weights) should make models more accurate, not less. For the second assumption, CPUs are saddled with an intrinsically power-inefficient memory model and mostly serial computation, but the evident parallelism of neural networks naturally leads to high-performance, power-efficient, massively parallel inference hardware. By upending these assumptions, TinyML can revolutionize all of ML, and not just inside microcontrollers.
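The "just replacing 32-bit weights with 8-bit weights" baseline Teig contrasts with principled compression can be sketched as generic symmetric uniform quantization (a textbook scheme, not Perceive's method): it rescales weights into 256 evenly spaced levels, bounding the per-weight error but remaining blind to how much information each weight actually carries:

```python
import numpy as np

def quantize_uniform_int8(w: np.ndarray):
    """Naive symmetric uniform quantization to 8 bits (256 levels)."""
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

rng = np.random.default_rng(seed=0)
w = rng.normal(0.0, 0.05, size=1000).astype(np.float32)

q, scale = quantize_uniform_int8(w)
w_hat = q.astype(np.float32) * scale  # dequantized reconstruction

# Per-weight error is bounded by scale/2 regardless of how much information
# each weight carries -- exactly the blindness the talk criticizes.
print(f"max abs error: {np.abs(w - w_hat).max():.6f} <= {scale / 2:.6f}")
```

Every weight gets the same step size here, determined by the single largest weight; an information-theoretic approach would instead spend precision where the network is sensitive to it, which is why principled compression can improve rather than degrade accuracy.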

See here for a PDF of the slides.

“Accuracy: Beware of Red Herrings and Black Swans,” a Presentation from Perceive
https://www.edge-ai-vision.com/2020/12/accuracy-beware-of-red-herrings-and-black-swans-a-presentation-from-perceive/
Tue, 15 Dec 2020

Steve Teig, CEO of Perceive, presents the “Accuracy: Beware of Red Herrings and Black Swans” tutorial at the September 2020 Embedded Vision Summit.

Machine learning aims to construct models that are predictive: accurate even on data not used during training. But how should we assess accuracy? (Hint: simply computing the average error on a pre-determined test set, while nearly universal, is frequently a bad strategy.) How can we avoid catastrophic errors due to black swans—rare, highly atypical events? Consider that, at 30 frames per second, video presents so many events that even “highly atypical” ones occur every day! How can we avoid overreacting to red herrings—coincidences in the training data that are irrelevant? After all, a model’s entire knowledge of the world is the data used in training.

To build more trustworthy models, we must re-examine how to measure accuracy and how best to achieve it. This talk challenges some widely held assumptions and offers some novel steps forward, occasionally livened by colorful, zoological metaphors.
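The black-swan point can be made concrete with a toy error distribution (the numbers below are invented for illustration, not from the presentation): a model that is nearly always excellent but occasionally catastrophic can beat a uniformly mediocre one on average error while being far worse in the tail:

```python
# Toy error distributions over 10,000 predictions.
# Model A: nearly perfect, but fails badly 10 times (the "black swans").
# Model B: uniformly mediocre.
errors_a = [0.01] * 9990 + [50.0] * 10
errors_b = [0.10] * 10000

for name, errs in [("A", errors_a), ("B", errors_b)]:
    print(f"model {name}: mean error = {sum(errs) / len(errs):.3f}, "
          f"worst error = {max(errs):.2f}")
# Model A "wins" on average error yet is the one that fails catastrophically.
```

A test-set average would select model A, even though at video rates its one-in-a-thousand failures would occur many times per minute; this is why the talk argues for accuracy metrics that account for the tail, not just the mean.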

See here for a PDF of the slides.

“The Opportunities and Challenges in Realizing the Potential for AI at the Edge: An Update from the Front Lines,” An Embedded Vision Summit Expert Panel Discussion
https://www.edge-ai-vision.com/2020/12/the-opportunities-and-challenges-in-realizing-the-potential-for-ai-at-the-edge-an-update-from-the-front-lines-an-embedded-vision-summit-expert-panel-discussion/
Fri, 11 Dec 2020

Samir Kumar, Managing Director at Microsoft M12, moderates the “Opportunities and Challenges in Realizing the Potential for AI at the Edge: An Update from the Front Lines” Expert Panel at the September 2020 Embedded Vision Summit. Other panelists include Pete Warden, Staff Research Engineer at Google; Stevie Bathiche, Technical Fellow at Microsoft; Steve Teig, CEO at Perceive; and Luis Ceze, Professor at the University of Washington and Co-founder and CEO of OctoML.

The opportunity for edge AI solutions at scale is massive—and is expected to eclipse cloud-based approaches in the next few years. But to realize this potential, we must find ways to simplify and democratize the development and deployment of edge AI systems. What are the most critical gaps that must be filled to streamline edge AI development and deployment? What are the most promising recent innovations in this space? What should “DevOps for edge AI” look like? In this panel discussion, you’ll hear perspectives on these and related questions from leading experts who are enabling the next generation of edge AI solutions.
