Fast-track your Deep Learning Applications with TySOM

Developing a deep learning application, such as one for edge processing, to run on an FPGA is not without its challenges. Our TySOM™ embedded development kits, at the heart of which are some of Xilinx’s most powerful SoC FPGAs, and the reference designs we’ve created for them can save you time.

Deep learning algorithms are becoming increasingly popular for ‘edge processing’ in IoT applications, where the goal is to have human-level accuracy in object recognition and classification. Examples of edge processes include face detection and recognition in security cameras, video classification, speech recognition, real-time multiple object tracking, character recognition, gesture recognition, financial forecasting and medical diagnostic systems.

Deep learning algorithms, as a subset of machine learning, are inspired by human brain neural networks.

Applying biological neural network concepts to edge processing has proved effective in teaching machines how to solve problems; for example, deciding whether a picture shows a bicycle or a car. See figure 1.

Figure 1 – A never-before-seen picture of a bicycle can be identified as such because the machine has learned from training data comprising pictures of bicycles and cars.


Convolutional Neural Networks (CNNs) have proved fast and reliable for image detection and recognition in computer vision applications. Stacking many such layers produces the deep network from which a deep learning model is built.
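For illustration only (this is not code from Aldec’s reference designs), a minimal two-class CNN of the kind described above might be sketched in PyTorch as follows; the layer sizes and the 64x64 input resolution are arbitrary assumptions:

    # Minimal sketch of a two-class CNN (e.g. bicycle vs. car), for illustration only.
    import torch
    import torch.nn as nn

    class TinyClassifier(nn.Module):
        def __init__(self, num_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level edges and colours
                nn.ReLU(),
                nn.MaxPool2d(2),                              # 64x64 -> 32x32
                nn.Conv2d(16, 32, kernel_size=3, padding=1),  # higher-level shapes
                nn.ReLU(),
                nn.MaxPool2d(2),                              # 32x32 -> 16x16
            )
            self.classifier = nn.Linear(32 * 16 * 16, num_classes)

        def forward(self, x):
            x = self.features(x)
            x = torch.flatten(x, 1)
            return self.classifier(x)

    model = TinyClassifier()
    dummy = torch.randn(1, 3, 64, 64)    # one 64x64 RGB image
    print(model(dummy).softmax(dim=1))   # probabilities for the two classes

After training on labelled pictures of bicycles and cars, a network of this general shape is what a deep learning framework would compile and deploy to the target hardware.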


Let’s Go

Implementing deep learning algorithms in IoT edge devices requires processing systems that support computationally heavy, multi-layer networks with low power consumption.

Thanks to their reconfigurability, parallelism and energy efficiency, FPGAs have already proved their worth in computationally intensive image- and voice-recognition applications. For these reasons, FPGAs are the logical choice for demanding edge-processing applications.

To aid in the design and verification of deep learning applications, Aldec has developed several reference designs for its TySOM embedded development kits. The kits feature Xilinx® Zynq™ All Programmable SoCs, which combine FPGA fabric with an ARM® Cortex processor.

The reference designs include Deep Learning Processing Units (DPUs) implemented in the FPGA side of the Zynq device for acceleration, which results in 45 fps for a 3-channel input. Indeed, the bigger the FPGA, the more DPUs can be added, bringing better performance. For example, Aldec’s TySOM-3A-ZU19EG embedded prototyping board has 1,143K logic cells, which allows the implementation of multiple (1-3) DPUs, critical for multi-channel processing applications. See figure 2.


Figure 2 – The architecture of an object detection reference design for TySOM.
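To illustrate the multi-channel point, the sketch below runs one worker thread per DPU instance, each pulling frames from a shared queue. Note that dpu_infer is a hypothetical placeholder standing in for the real DPU runtime call on the ARM side; it is not an actual Aldec or Xilinx API.

    # Hypothetical sketch: one worker per DPU instance servicing several input channels.
    import threading
    import queue

    NUM_DPUS = 3  # e.g. a large device such as the ZU19EG can host multiple DPU instances

    def dpu_infer(frame, dpu_id):
        # Placeholder: a real design would submit 'frame' to DPU 'dpu_id' in the
        # Zynq programmable logic and return the detection results.
        return []

    def worker(dpu_id, frames):
        while True:
            channel, frame = frames.get()
            if frame is None:  # sentinel value: shut this worker down
                break
            detections = dpu_infer(frame, dpu_id)
            print(f"channel {channel}: {len(detections)} objects (DPU {dpu_id})")

    frames = queue.Queue(maxsize=2 * NUM_DPUS)
    workers = [threading.Thread(target=worker, args=(i, frames)) for i in range(NUM_DPUS)]
    for w in workers:
        w.start()

    # Feed one dummy frame per channel, then stop the workers.
    for channel in range(3):
        frames.put((channel, object()))
    for _ in workers:
        frames.put((None, None))
    for w in workers:
        w.join()

The point of the sketch is simply that each extra DPU instance gives the software another independent inference engine to feed, which is why a larger FPGA translates directly into more channels processed in real time.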


Aldec also provides reference designs, including Xilinx SDx platforms, for face detection, gesture detection, pedestrian detection and segmentation. All designs are tested with different inputs, such as a USB camera, FMC-ADAS (which connects to a Blue Eagle camera via an FPD-Link III interface) and pre-recorded videos stored on an SD card.
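As a rough illustration of how one processing loop can consume those different inputs, the OpenCV snippet below accepts either a USB camera index or a path to a pre-recorded video file; it is a generic sketch, not Aldec’s actual test harness, and the file path shown is made up.

    # Generic input-capture sketch using OpenCV; the frame handling is left as a stub.
    import cv2

    source = 0  # 0 = first USB camera; or e.g. "/media/sd_card/test_clip.mp4" for a recording
    cap = cv2.VideoCapture(source)
    if not cap.isOpened():
        raise RuntimeError(f"Cannot open input source: {source}")

    while True:
        ok, frame = cap.read()
        if not ok:  # end of the recording, or the camera was disconnected
            break
        # ...hand 'frame' to the detection pipeline here...

    cap.release()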

About Aldec

Headquartered in Henderson, Nevada, Aldec Inc. is an industry leader in Electronic Design Verification and offers a patented technology suite including: RTL Design, RTL Simulators, Hardware-Assisted Verification, SoC and ASIC Prototyping, Design Rule Checking, CDC Verification, IP Cores, Requirements Lifecycle Management, DO-254 Functional Verification and Military/Aerospace solutions. www.aldec.com


Aldec Europe

Tel: 01295 201 240

www.aldec.com
