Flex Logix - Edge AI and Vision Alliance
https://www.edge-ai-vision.com/category/provider/flex-logix/

Flex Logix Expands Upon Industry-leading Embedded FPGA Customer Base
https://www.edge-ai-vision.com/2023/09/flex-logix-expands-upon-industry-leading-embedded-fpga-customer-base/
Mon, 25 Sep 2023

Company’s award-winning EFLX® eFPGA technology currently used by 20 customers for 40 unique chips

MOUNTAIN VIEW, Calif., Sept. 25, 2023 /PRNewswire/ — Flex Logix® Technologies, Inc., the leading supplier of embedded FPGA (eFPGA) IP, announced today that it now has 20 worldwide customers that have licensed the company’s advanced EFLX eFPGA technology architecture for 40 chips.

More than 25 chips have been successfully fabricated in silicon using EFLX eFPGA with many more in design. Leading (disclosed) customers today include Renesas, Datung and Boeing.

“Our customers are worldwide – in the U.S., Japan, Europe, Israel, and China,” said Geoff Tate, Flex Logix co-founder and CEO. “Flex Logix’s partners are leveraging our eFPGA technology in a wide range of applications, using multiple foundries on nodes from 180nm down to 7nm, with customer evaluations ongoing for 5nm and 3nm as well.”

“FPGAs today are mainstream, used in high volume across many applications,” said Andy Jaros, VP of Sales at Flex Logix. “Our customers take advantage of the unique benefits of embedded FPGA technology to cut the size, cost, and power of FPGAs through integration into their SoCs or processors. Customers who have never used FPGAs are now aggressively adding eFPGA to give their chips the flexibility to adapt to changing standards, changing algorithms, and to enable customers to customize the chips for their proprietary solutions.”

About Flex Logix

Flex Logix is a reconfigurable computing company providing leading edge eFPGA and AI Inference technologies for semiconductor and systems companies. Flex Logix eFPGA enables volume FPGA users to integrate the FPGA into their companion SoC, resulting in a 5-10x reduction in the cost and power of the FPGA and increasing compute density which is critical for communications, networking, data centers, microcontrollers and others. Its scalable AI inference is the most efficient, providing much higher inference throughput per square millimeter and per watt. Flex Logix supports process nodes from 180nm to 7nm, with 5nm and 3nm in development. Flex Logix is headquartered in Mountain View, California and has an office in Austin, Texas. For more information, visit https://flex-logix.com.

For general information on InferX and EFLX product lines, visit our website at this link. For more information under NDA, qualified customers can contact us at this link.

 

Flex Logix Introduction to Its Latest-generation InferX IP for AI Inference at the Edge
https://www.edge-ai-vision.com/2023/08/flex-logix-introduction-to-its-latest-generation-inferx-ip-for-ai-inference-at-the-edge/
Wed, 30 Aug 2023

Jeremy Roberson, Technical Director and Software Architect for AI and Machine Learning at Flex Logix, demonstrates the company’s latest edge AI and vision technologies and products at the 2023 Embedded Vision Summit. Specifically, Roberson introduces the company’s latest-generation InferX IP for AI inference at the edge.

Flex Logix Demonstration of Its InferX IP for AI Inference Implementing Object Detection at the Edge
https://www.edge-ai-vision.com/2023/08/flex-logix-demonstration-of-its-inferx-ip-for-ai-inference-implementing-object-detection-at-the-edge/
Wed, 30 Aug 2023

Jeremy Roberson, Technical Director and Software Architect for AI and Machine Learning at Flex Logix, demonstrates the company’s latest edge AI and vision technologies and products at the 2023 Embedded Vision Summit. Specifically, Roberson demonstrates the company’s InferX IP for AI inference at the edge, implementing object detection.

“Challenges in Architecting Vision Inference Systems for Transformer Models,” a Presentation from Flex Logix
https://www.edge-ai-vision.com/2023/06/challenges-in-architecting-vision-inference-systems-for-transformer-models-a-presentation-from-flex-logix/
Tue, 27 Jun 2023

Cheng Wang, Co-founder and CTO of Flex Logix, presents the “Challenges in Architecting Vision Inference Systems for Transformer Models” tutorial at the May 2023 Embedded Vision Summit. When used correctly, transformer neural networks can deliver greater accuracy for less computation. But transformers are challenging for existing AI engine architectures because…


Flex Logix Announces InferX High Performance IP for DSP and AI Inference
https://www.edge-ai-vision.com/2023/04/flex-logix-announces-inferx-high-performance-ip-for-dsp-and-ai-inference/
Wed, 26 Apr 2023

InferX delivers the performance of the fastest FPGA or GPU in an SoC at 1/10th the power and cost while keeping all of the reconfigurability

MOUNTAIN VIEW, Calif., April 24, 2023 /PRNewswire/ — Flex Logix® Technologies, Inc., a leading innovator in DSP & AI inference IP and the leading supplier of eFPGA IP, announced today the availability of InferX™ IP and software for DSP and AI inference. InferX joins EFLX® eFPGA as Flex Logix’s second IP offering. It can be used by device manufacturers and systems companies that want the performance of a DSP-FPGA or an AI-GPU in their SoC, but at a fraction of the cost and power. The company’s EFLX eFPGA product line has already been proven in dozens of chips, with many more in design, across process nodes from 180nm to 7nm with 5nm in development.

“By integrating InferX into an SoC, customers not only maintain the performance and programmability of an expensive and power-hungry FPGA or GPU, but they also benefit from much lower power consumption and cost,” said Geoff Tate, Founder and CEO of Flex Logix. “This is a significant advantage to systems customers that are designing their own ASICs, as well as chip companies that have traditionally had the DSP-FPGA or AI-GPU sitting next to their chip and can now integrate it to get more revenue and save their customer power and cost. InferX is 80% hard-wired, but 100% reconfigurable.”

The end-user benefit is more powerful DSP and AI in smaller systems, at lower power and lower cost. With InferX AI, users can process megapixel images with more accurate models such as YOLOv5s6 and YOLOv5l6 to detect objects at smaller sizes and greater distances than is affordable today.

The InferX Advantage

InferX DSP is InferX hardware combined with soft logic for DSP operations, which Flex Logix provides for functions such as FFTs that are switchable on the fly between sizes (e.g. 1K to 4K to 2K), FIR filters with any number of taps, complex matrix inversions (16×16, 32×32 or other sizes), and many more. InferX DSP streams at Gigasamples/second, can run multiple DSP operations, and operations can be chained. DSP is performed on Real/Complex INT16 data with 32-bit accumulation for very high accuracy. With InferX DSP, customers can integrate DSP performance that is as fast as or faster than the leading FPGA at 1/10th of the cost and power, while keeping all of the flexibility to reconfigure almost instantly. For example, InferX DSP in less than 50 square millimeters of silicon in N5 can compute Complex INT16 FFTs at 68 Gigasamples/second and switch instantly between FFT sizes from 256 to 8K points. This is faster than the best FPGA available today, at a fraction of the cost, power and size.
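For readers less familiar with these kernels, the sketch below is a plain NumPy floating-point reference for the three operation classes named above (size-switchable FFTs, FIR filters of arbitrary length, and complex matrix inversion). It is only an illustration of the math, not the InferX API or its fixed-point implementation; the hardware performs these operations on Real/Complex INT16 data with 32-bit accumulation.

```python
# Floating-point reference for the DSP operation classes named above.
# This is NOT the InferX API, just NumPy illustrations of the kernels
# (size-switchable FFT, FIR filter, complex matrix inversion) that the
# announcement says InferX DSP soft logic implements in INT16/INT32.
import numpy as np

rng = np.random.default_rng(0)

# 1. FFTs whose size can change from block to block (e.g. 1K to 4K to 2K).
for n_points in (1024, 4096, 2048):
    block = rng.standard_normal(n_points) + 1j * rng.standard_normal(n_points)
    spectrum = np.fft.fft(block)   # hardware: streaming complex INT16 in, 32-bit accumulate
    assert spectrum.shape == (n_points,)

# 2. FIR filter with an arbitrary number of taps.
taps = rng.standard_normal(63)          # a 63-tap filter; any length is allowed
samples = rng.standard_normal(10_000)
filtered = np.convolve(samples, taps, mode="same")

# 3. Complex matrix inversion (16x16 or 32x32 in the hardware example).
a = rng.standard_normal((16, 16)) + 1j * rng.standard_normal((16, 16))
a_inv = np.linalg.inv(a)
assert np.allclose(a @ a_inv, np.eye(16), atol=1e-6)
```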

InferX AI is InferX hardware combined with the Inference Compiler for AI inference. The Inference Compiler takes in a customer’s neural network model in PyTorch, ONNX or TFLite format, quantizes the model with high accuracy, compiles the graph for high utilization and generates the run-time code that executes on the InferX hardware. A simple, easy-to-use API is provided to control the InferX IP. With InferX AI, customers can integrate AI inference performance that is as fast as or faster than leading edge AI modules at 1/10th of the cost and power, while keeping all of the flexibility and the ability to run multiple models or change models on the fly. InferX AI is optimized for megapixel batch=1 operation, and the Inference Compiler is available for evaluation. As an example, with about 15 square millimeters of silicon in N7, InferX AI can run YOLOv5s at 175 inferences/second, which is 40% faster than the fastest edge AI module, Orin AGX 60W.
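As a concrete look at the front end of this flow, the sketch below exports a stand-in PyTorch model to ONNX, one of the three input formats listed above, using standard PyTorch/torchvision APIs. The quantize/compile step itself is represented only by a comment, since the Inference Compiler’s own API is provided to qualified customers and is not documented in this announcement; the model, file name and input shape are illustrative assumptions.

```python
# Front end of the InferX AI flow described above, sketched with standard
# PyTorch/torchvision APIs: export a trained model to ONNX, one of the three
# formats (PyTorch, ONNX, TFLite) the Inference Compiler accepts.
# The model, file name and input shape are placeholders, not Flex Logix code.
import torch
import torchvision

# A stock ResNet-18 stands in for the customer's vision network
# (for example, a YOLOv5-class detector in a real deployment).
model = torchvision.models.resnet18(weights=None)
model.eval()

# batch=1 input, matching the batch=1 operation InferX AI is optimized for
dummy_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy_input,
    "customer_model.onnx",
    input_names=["image"],
    output_names=["logits"],
    opset_version=13,
)

# customer_model.onnx would then be handed to the InferX Inference Compiler,
# which, per the description above, quantizes the model, compiles the graph
# for high utilization, and generates run-time code for the InferX hardware,
# controlled through its API (not shown here).
```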

InferX technology is proven and production-qualified in 16nm and will be available in the most popular FinFET nodes.

InferX hardware is also scalable. Its building block is a compute tile that can be arrayed for more throughput; for example, a 4-tile array delivers 4x the performance of a 1-tile array. The InferX array sized for the performance the customer wants is delivered with an AXI bus interface for easy integration into their SoC.
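The linear scaling claim above amounts to simple arithmetic; the minimal sketch below spells it out, assuming a hypothetical single-tile throughput figure (the real number depends on the model and process node).

```python
# Minimal illustration of the linear tile scaling described above: an N-tile
# InferX array delivers roughly N times the throughput of a single tile.
# The baseline figure is a hypothetical placeholder, not a published spec.
def inferx_array_throughput(tiles: int, one_tile_inferences_per_s: float) -> float:
    """Expected inferences/second for an array of `tiles` compute tiles."""
    return tiles * one_tile_inferences_per_s

baseline = 100.0  # hypothetical single-tile throughput for some model
for n in (1, 2, 4, 8):
    print(f"{n} tile(s): ~{inferx_array_throughput(n, baseline):.0f} inferences/s")
```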

For general information on InferX and EFLX product lines, visit our website at this link. For more information under NDA, qualified customers can contact us at this link.

About Flex Logix

Flex Logix is a reconfigurable computing company providing leading edge eFPGA and AI Inference technologies for semiconductor and systems companies. Flex Logix eFPGA enables volume FPGA users to integrate the FPGA into their companion SoC, resulting in a 5-10x reduction in the cost and power of the FPGA and increasing compute density, which is critical for communications, networking, data centers, microcontrollers and others. Its scalable AI inference is the most efficient, providing much higher inference throughput per square millimeter and per watt. Flex Logix supports process nodes from 180nm to 7nm, with 5nm in development, and can support other nodes on short notice. Flex Logix is headquartered in Mountain View, California and has an office in Austin, Texas. For more information, visit https://flex-logix.com.

Flex Logix Demonstration of the InferX X1M M.2 Card Running Multi-camera Object Detection
https://www.edge-ai-vision.com/2022/09/flex-logix-demonstration-of-the-inferx-x1m-m-2-card-running-multi-camera-object-detection/
Wed, 28 Sep 2022

Salvador Alvarez, director of solutions architecture at Flex Logix Technologies, demonstrates the company’s latest edge AI and vision technologies and products at the 2022 Embedded Vision Summit. Specifically, Alvarez demonstrates the InferX X1M AI accelerator running a real-time multi-camera object detection algorithm in a small form factor NUC system. The X1M accelerator is implemented in an M.2 card form factor and dissipates under 10W of total power.

Flex Logix Demonstration of the InferX X1P1 Running Hardhat Detection with High Speed and Accuracy
https://www.edge-ai-vision.com/2022/09/flex-logix-demonstration-of-the-inferx-x1p1-running-hardhat-detection-with-high-speed-and-accuracy/
Tue, 27 Sep 2022

Salvador Alvarez, director of solutions architecture at Flex Logix Technologies, demonstrates the company’s latest edge AI and vision technologies and products at the 2022 Embedded Vision Summit. Specifically, Alvarez demonstrates the InferX X1 AI accelerator running a real-time high-resolution hardhat detection algorithm on the X1P1 PCIe accelerator in a standard industrial PC system. The X1P1 accelerator provides an 83x speedup versus the algorithm running on the PC without acceleration.

Flex Logix Unveils First AI-Integrated Mini-ITX System to Simplify Edge and Embedded AI Deployment
https://www.edge-ai-vision.com/2022/09/flex-logix-unveils-first-ai-integrated-mini-itx-system-to-simplify-edge-and-embedded-ai-deployment/
Tue, 13 Sep 2022

InferX Hawk AI system reduces time to market, risk and costs with an AI Mini-ITX x86 system for new edge AI appliances and a drop-in upgrade for existing solutions

MOUNTAIN VIEW, Calif., Sept. 13, 2022 /PRNewswire/ — Flex Logix® Technologies, Inc., supplier of high performance and efficient edge AI inference accelerators and the leading supplier of eFPGA IP, today announced the InferX™ Hawk – a hardware and software-ready mini-ITX x86 system designed to help customers quickly and easily customize, build and deploy edge and embedded AI systems. The InferX Hawk system includes the Flex Logix InferX X1 AI accelerator chip, AMD Ryzen™ Embedded R2314 SoC, InferX Runtime software, and the EasyVision platform running Linux or Windows to deliver an integrated low power, high-performance AI system.

The AMD Ryzen Embedded R2314 delivers performance-per-watt efficiency using the “Zen+” core architecture and Radeon™ graphics. With the Hawk mini-ITX solution, customers can save over six months of hardware and software development time, as well as system cost and power, compared with NVIDIA and other solutions.

“Adding AI inference to a product can be a revenue-generating game changer and being able to leverage an established industry standard accelerates development and time-to-market,” said Barrie Mullins, VP of Product Management for Flex Logix. “The InferX Hawk system is an out-of-the-box solution that delivers increased performance, lower power and decreased costs over NVIDIA and other competitive solutions.”

“We designed our Ryzen Embedded R2000 Series to deliver the performance and functionality needed for emerging AI and machine learning applications,” said Rajneesh Gaur, corporate vice president and general manager, Adaptive & Embedded Computing Group at AMD. “Whether a customer is designing an industrial application, thin client, or mini-PCs, the ability to have high performance at optimized power and great graphics is a key competitive advantage.”

Target Markets

The InferX Hawk system is designed for a wide range of smart vision and video applications, many of which are traditionally based on Windows. The Hawk system offers edge AI developers the flexibility to meet their customers’ needs with their operating system of choice and enables:

  • Safety and Security
    • Mask, personal protection equipment (PPE) detection, building access, data anonymization and privacy
  • Manufacturing and Industrial Optical Inspection
    • Employee safety, logistics and packaging, and inspection of parts, processes and quality
  • Traffic and Parking Management
    • Traffic junction monitoring, vehicle detection and counting, public and private parking structures, toll booths
  • Retail
    • Logistics, safety, consumer monitoring, automated checkout, and stock management
  • Healthcare
    • Medical image analytics, patient monitoring, mask detection, staff and facility access control and safety
  • Agriculture
    • Crop inspection, weed and pest detection, automated harvesting, yield and quality analysis, animal monitoring and health analysis
  • Robotics
    • First/last mile delivery, forklifts, tuggers, drones, and autonomous machines

The Hawk Advantage

The Hawk system leverages Flex Logix’s InferX accelerator, which is the industry’s most efficient AI inference chip for edge systems, offering a price/performance advantage over existing edge inference solutions. Customers using Hawk can also take advantage of Flex Logix’s EasyVision platform that provides ready-to-use models that are trained to perform the most common object detection capabilities such as hard-hat detection, people counting, face mask detection and license plate recognition.

Below are a few high-level technical features of the InferX Hawk system.

Processing:

  • Dual InferX X1 accelerators
  • AMD Quad-core Zen+ @ 2.1GHz
  • Hexa-core Radeon Vega GPU
  • Video Codec Accelerator inc. H.264, HEVC (H.265), VP9
  • Standard Mini-ITX Form Factor
  • 2x DDR4 SO-DIMM up to 32GB capacity

Standard I/O:

  • Dual Gigabit Ethernet
  • 2x USB 3.1, 2x USB 2.0 (all Type A)
  • 1x USB 3.2 Type C
  • 2x DisplayPort
  • Dual COM ports

Storage:

  • M.2 M-Key for NVMe SSD and SATA
  • M.2 E-Key for Wi-Fi/LTE support
  • Internal SATA Gen3 connector

TDP Power:

  • 25W – 40W based on performance
  • Typical power is workload dependent

Dimensions:

  • 6.7″ x 6.7″ Mini ITX

About Flex Logix

Flex Logix is a reconfigurable computing company providing AI inference and eFPGA solutions based on software, systems and silicon. Its InferX™ X1 is the industry’s most-efficient AI edge inference accelerator that will bring AI to the masses in high-volume applications by providing much higher inference throughput per dollar and per watt. Flex Logix eFPGA enables volume FPGA users to integrate the FPGA into their companion SoC resulting in a 5-10x reduction in the cost and power of the FPGA and increasing compute density which is critical for communications, networking, data centers, and others. Flex Logix is headquartered in Mountain View, California and has offices in Austin, Texas and Vancouver, Canada. For more information, visit https://flex-logix.com.

Flex Logix and CEVA Announce First Working Silicon of a DSP with Embedded FPGA to Allow a Flexible/Changeable ISA
https://www.edge-ai-vision.com/2022/06/flex-logix-and-ceva-announce-first-working-silicon-of-a-dsp-with-embedded-fpga-to-allow-a-flexible-changeable-isa/
Mon, 27 Jun 2022

Flex Logix® EFLX embedded FPGA brings reconfigurable computing to CEVA-X2 DSP instruction extension to support demanding and changing workloads

MOUNTAIN VIEW, Calif. – June 27, 2022 – Flex Logix Technologies, Inc., the leading supplier of reconfigurable computing solutions, architecture and software, and CEVA, Inc. (NASDAQ:CEVA), the leading licensor of wireless connectivity and smart sensing technologies and integrated IP solutions, have announced today the world’s first successful silicon implementation using Flex Logix’s EFLX® embedded FPGA (eFPGA) connected to a CEVA-X2 DSP instruction extension interface. Enabling flexible and changeable instruction sets to meet demanding and changing processing workloads, the ASIC, known as SOC2, was designed and taped out in a TSMC 16 nm technology by Bar-Ilan University SoC Lab, as part of the HiPer Consortium, backed by the Israeli Innovation Authority (IIA).

“The ability to add custom instructions to minimize power and maximize performance efficiency of embedded processors has been around for decades,” said Andy Jaros, VP of Sales and Marketing for Flex Logix’s eFPGA IP. “The ISA extension capability works great for targeted applications, but it can be a costly solution when the application changes or new use cases need different instructions requiring a new chip to be developed. By working with CEVA and the HiPer Consortium, the SOC2 proves that reconfigurable computing is here with a DSP Instruction Set Architecture (ISA) that can be adapted to different workloads with custom hard-wired instructions that can be changed at any time in the future.”

“Being part of the HiPer Consortium, we were excited to work with Bar-Ilan University SoC Lab team and Flex Logix to test out new capabilities for the CEVA-X2 DSP that had never been tried before,” said Erez Bar-Niv, CEVA’s Chief Technology Officer. “The SOC2 contains two processing clusters, each containing two CEVA-X2 DSP cores and EFLX eFPGA for programming and executing DSP instructions extensions, connected using the CEVA-Xtend mechanism. Flex Logix and CEVA’s mutual customers can now confidently utilize custom instructions to extract more value from their ASIC by being able to target different DSP applications on top of communication and sound with a customizable ISA post manufacturing.”

EFLX eFPGA can be used anywhere in an ASIC architecture. In addition to the ISA extension interface, EFLX has been used for packet processing, security, encryption, IO muxes, and general-purpose algorithm acceleration. Using EFLX, chip developers can implement eFPGA arrays from a few thousand LUTs to over a million LUTs, with performance and density per square millimeter similar to those of leading FPGA companies in the same process generation. EFLX eFPGA is modular, so arrays can be spread throughout the chip, can be all-logic or DSP-heavy, and can integrate RAM. EFLX eFPGA is available today in popular 12, 16, 22, 28 and 40 nm process nodes and in development at 7 nm, with more advanced nodes planned for future release.

Product briefs for EFLX eFPGA are available now at https://www.flex-logix.com/resources/

The CEVA-X2 is a multipurpose hybrid DSP and controller based on a 5-way VLIW/SIMD architecture with a 10-stage pipeline, operating at over 1GHz in a 16nm process. As an advanced DSP optimized for intensive workloads, it has been specifically designed to tackle use cases such as 5G PHY control, multi-microphone beamforming, AI processing and neural network implementations. CEVA-X2 supports various software needs using the extensive CEVA DSP Library, the CEVA Neural Network Library, and a vast ecosystem of partner software solutions for any application. For more information on CEVA’s DSP product offerings for communication and sound, based on CEVA-X2 and its successor CEVA-BX2, visit https://www.ceva-dsp.com/app/wireless-communication/ and https://www.ceva-dsp.com/app/audio-voice-and-speech/

About Flex Logix

Flex Logix is a reconfigurable computing company providing AI inference and eFPGA solutions based on software, systems and silicon. Its InferX X1 is the industry’s most-efficient AI edge inference accelerator that will bring AI to the masses in high-volume applications by providing much higher inference throughput per dollar and per watt. Flex Logix’s eFPGA platform enables chips to flexibly handle changing protocols, standards, algorithms, and customer needs and to implement reconfigurable accelerators that speed key workloads 30-100x compared to general purpose processors. Flex Logix is headquartered in Mountain View, California and has offices in Austin, Texas and Vancouver, Canada. For more information, visit https://flex-logix.com.

About CEVA, Inc.

CEVA is the leading licensor of wireless connectivity and smart sensing technologies and integrated IP solutions for a smarter, safer, connected world. We provide Digital Signal Processors, AI engines, wireless platforms, cryptography cores and complementary software for sensor fusion, image enhancement, computer vision, voice input and artificial intelligence. These technologies are offered in combination with our Intrinsix IP integration services, helping our customers address their most complex and time-critical integrated circuit design projects. Leveraging our technologies and chip design skills, many of the world’s leading semiconductor companies, system companies and OEMs create power-efficient, intelligent, secure and connected devices for a range of end markets, including mobile, consumer, automotive, robotics, industrial, aerospace & defense and IoT.

Our DSP-based solutions include platforms for 5G baseband processing in mobile, IoT and infrastructure, advanced imaging and computer vision for any camera-enabled device, audio/voice/speech and ultra-low-power always-on/sensing applications for multiple IoT markets. For sensor fusion, our Hillcrest Labs sensor processing technologies provide a broad range of sensor fusion software and inertial measurement unit (“IMU”) solutions for markets including hearables, wearables, AR/VR, PC, robotics, remote controls and IoT. For wireless IoT, our platforms for Bluetooth (low energy and dual mode), Wi-Fi 4/5/6/6E (802.11n/ac/ax), Ultra-wideband (UWB), NB-IoT and GNSS are the most broadly licensed connectivity platforms in the industry.

Visit us at www.ceva-dsp.com and follow us on Twitter, YouTube, Facebook, LinkedIn and Instagram.

“High-Efficiency Edge Vision Processing Based on Dynamically Reconfigurable TPU Technology,” a Presentation from Flex Logix
https://www.edge-ai-vision.com/2022/06/high-efficiency-edge-vision-processing-based-on-dynamically-reconfigurable-tpu-technology-a-presentation-from-flex-logix/
Thu, 16 Jun 2022

Cheng Wang, Senior Vice President and Co-founder of Flex Logix, presents the “High-Efficiency Edge Vision Processing Based on Dynamically Reconfigurable TPU Technology” tutorial at the May 2022 Embedded Vision Summit.

To achieve high accuracy, edge computer vision requires teraops of processing to be executed in fractions of a second. Additionally, edge systems are constrained in terms of power and cost. This talk presents and demonstrates the novel dynamic TPU array architecture of Flex Logix’s InferX X1 accelerators and contrasts it to current GPU, TPU and other approaches to delivering the teraops computing required by edge vision inferencing.

Wang compares latency, throughput, memory utilization, power dissipation and overall solution cost. He also shows how existing trained models can be easily ported to run on the InferX X1 accelerator.

See here for a PDF of the slides.
