Algorithm Acceleration

Accelerating Applications in Vitis

Target Audience

Anyone needing to accelerate software applications using FPGAs, SoCs (such as Zynq®-7000 SoCs and Zynq UltraScale+™ MPSoCs), and Versal® ACAPs

Description

This workshop examines how to develop, debug, and profile new or existing C/C++ and RTL applications in the Vitis™ unified software environment, targeting both data center and embedded applications.

The emphasis is on:

  • Using OpenCL™ APIs to run hardware kernels on Alveo™ accelerator cards (a minimal host-code sketch follows this list)
  • Scheduling hardware kernels and controlling data movement by using OpenCL APIs and the Xilinx Runtime (XRT) library for embedded platforms
  • Demonstrating the Vitis environment GUI flow and makefile flow for both data center and embedded applications
  • Describing the Vitis platform execution model and XRT
  • Describing kernel development using C/C++ and RTL
  • Analyzing reports with the Vitis analyzer tool
  • Optimizing designs
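
For the first two bullets above, here is a minimal, hedged host-code sketch of the flow the workshop demonstrates: using the OpenCL™ APIs that ship with XRT to program an Alveo card and launch a hardware kernel. The kernel name (vadd), the xclbin file name, and the argument layout are illustrative assumptions, not workshop code.

```cpp
// Hedged sketch: "vadd", "vadd.xclbin", and the argument order are hypothetical.
// Uses the OpenCL C++ bindings (CL/cl2.hpp) distributed with XRT.
#define CL_HPP_TARGET_OPENCL_VERSION 120
#define CL_HPP_MINIMUM_OPENCL_VERSION 120
#include <CL/cl2.hpp>
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>

int main() {
    // 1. Locate the Xilinx platform and pick the first accelerator (Alveo) device.
    std::vector<cl::Platform> platforms;
    cl::Platform::get(&platforms);
    cl::Platform xilinx;
    for (auto &p : platforms)
        if (p.getInfo<CL_PLATFORM_NAME>().find("Xilinx") != std::string::npos)
            xilinx = p;
    std::vector<cl::Device> devices;
    xilinx.getDevices(CL_DEVICE_TYPE_ACCELERATOR, &devices);
    cl::Device device = devices.at(0);

    // 2. Create a context/queue and program the device with the FPGA binary (xclbin).
    cl::Context context(device);
    cl::CommandQueue q(context, device, CL_QUEUE_PROFILING_ENABLE);
    std::ifstream f("vadd.xclbin", std::ios::binary);
    std::vector<unsigned char> xclbin((std::istreambuf_iterator<char>(f)),
                                      std::istreambuf_iterator<char>());
    cl::Program::Binaries bins{xclbin};
    cl::Program program(context, {device}, bins);
    cl::Kernel krnl(program, "vadd");

    // 3. Allocate host-backed buffers, set kernel arguments, move data to the card,
    //    run the hardware kernel once, and read the result back.
    const int N = 4096;
    std::vector<int> a(N, 1), b(N, 2), out(N, 0);
    cl::Buffer bufA(context, CL_MEM_USE_HOST_PTR | CL_MEM_READ_ONLY,  N * sizeof(int), a.data());
    cl::Buffer bufB(context, CL_MEM_USE_HOST_PTR | CL_MEM_READ_ONLY,  N * sizeof(int), b.data());
    cl::Buffer bufO(context, CL_MEM_USE_HOST_PTR | CL_MEM_WRITE_ONLY, N * sizeof(int), out.data());
    krnl.setArg(0, bufA);
    krnl.setArg(1, bufB);
    krnl.setArg(2, bufO);
    krnl.setArg(3, N);

    q.enqueueMigrateMemObjects({bufA, bufB}, 0);                        // host -> device
    q.enqueueTask(krnl);                                                // execute kernel
    q.enqueueMigrateMemObjects({bufO}, CL_MIGRATE_MEM_OBJECT_HOST);     // device -> host
    q.finish();

    std::cout << "out[0] = " << out[0] << std::endl;                    // expect 3
    return 0;
}
```

The same migrate/enqueue pattern applies whether the FPGA binary was produced through the Vitis GUI flow or the makefile flow.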

Algorithm Optimization

Target Audience

Engineers and designers who wish to implement and optimize algorithms in AMD-Xilinx devices

Description

This workshop provides an introduction to using AMD-Xilinx platforms and devices to accelerate software algorithms. It explores how both software and hardware engineers can use the complete Xilinx development portfolio, including server solutions, hardware devices, and software platforms, to leverage their various skill sets and minimize development time on projects requiring algorithm acceleration.

The emphasis is on:

  • Hardware and software solutions that may be used to develop and deploy highly accelerated algorithms for a wide variety of applications, including data analytics, machine learning and AI, video and image processing, finance, and genomics
  • Xilinx accelerator cards for servers
  • Using Versal ACAPs for algorithm acceleration
  • New software tools and flows to improve productivity
  • Xilinx software emulation flows
  • Integrated hardware/software simulation flows and design methodology
  • How to explore, profile, and optimize host code and hardware acceleration kernels
  • Using code vectorization, compiler directives, and C coding style to achieve kernel optimization (see the pragma sketch after this list)
  • Using AMD-Xilinx platforms in cloud and edge deployment solutions
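
To make the kernel-optimization bullet concrete, the following is a hedged sketch of a trivial Vitis HLS C++ kernel annotated with the kind of compiler directives the workshop covers. The kernel name (vscale), the port bundles, and the loop body are illustrative assumptions.

```cpp
// Hypothetical Vitis HLS kernel: multiplies each input element by a scalar factor.
extern "C" void vscale(const int* in, int* out, int factor, int n) {
#pragma HLS INTERFACE m_axi     port=in     offset=slave bundle=gmem0
#pragma HLS INTERFACE m_axi     port=out    offset=slave bundle=gmem1
#pragma HLS INTERFACE s_axilite port=in
#pragma HLS INTERFACE s_axilite port=out
#pragma HLS INTERFACE s_axilite port=factor
#pragma HLS INTERFACE s_axilite port=n
#pragma HLS INTERFACE s_axilite port=return

scale_loop:
    for (int i = 0; i < n; ++i) {
#pragma HLS PIPELINE II=1   // aim for one loop iteration per clock cycle
        out[i] = in[i] * factor;
    }
}
```

Directives such as PIPELINE, UNROLL, and ARRAY_PARTITION, combined with C coding style, are the main levers for trading FPGA resources against throughput.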

Building Accelerator Kernels Using the Versal AI Engine

Target Audience

Software and hardware developers, system architects, and anyone who needs to accelerate their software applications using Versal devices

Description

This workshop describes the Versal® AI Engine architecture, how to program the AI Engines (single-kernel programming and multiple-kernel programming using data flow graphs), the data communication between the PL and the AI Engines, and how to analyze kernel programs using various debugger features.

The emphasis is on:

  • Illustrating the AI Engine architecture
  • Designing single AI Engine kernels using the Vitis™ unified software platform
  • Designing multiple AI Engine kernels using data flow graphs with the Vitis IDE (see the graph sketch after this list)
  • Reviewing data movement between AI Engines, between AI Engines via memory and DMA, and between AI Engines and the programmable logic (PL)
  • Analyzing and debugging kernel performance
  • Implementing a system-level design flow (PS + PL + AIE) and the supported simulation flows
  • Using interfaces for data movement between the PL and the AI Engines
  • Utilizing AI Engine APIs and advanced MAC intrinsics to implement filters
  • Utilizing the AI Engine library for faster development
  • Applying advanced features for optimizing a system-level design
  • Optimizing AI Engine kernels using compiler directives, programming style, and efficient movement of data
  • Describing C++ kernel template functionality
  • Identifying the different types of kernel instance states
  • Prioritizing AI Engine APIs over intrinsic functions
  • Designing and vectorizing an AI Engine kernel with System View Visual System Integrator
  • Programming a FIR filter using AI Engine APIs
  • Debugging applications using the Vitis unified software platform
  • Introducing the AI Engine kernel encryption feature
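
As a concrete reference for the data flow graph topics above, here is a minimal, hedged sketch of a single-kernel ADF graph, assuming the window-based kernel interface; the kernel name (scale), the file names, and the window/PLIO sizes are illustrative assumptions rather than workshop code.

```cpp
// ---- scale.cc : a trivial AI Engine kernel (hypothetical) ---------------------
#include <adf.h>

// Reads 64 int32 samples from the input window, scales each by 3, writes them out.
void scale(input_window<int32>* in, output_window<int32>* out) {
    for (int i = 0; i < 64; ++i) {
        int32 v = window_readincr(in);
        window_writeincr(out, v * 3);
    }
}

// ---- graph.cpp : the data flow graph and simulator top level ------------------
#include <adf.h>
using namespace adf;

void scale(input_window<int32>* in, output_window<int32>* out);  // kernel prototype

class ScaleGraph : public graph {
public:
    input_plio  in;
    output_plio out;
    kernel k;
    ScaleGraph() {
        k   = kernel::create(scale);
        in  = input_plio::create("DataIn",  plio_32_bits, "data/input.txt");
        out = output_plio::create("DataOut", plio_32_bits, "data/output.txt");
        connect<window<256>>(in.out[0], k.in[0]);   // 256 bytes = 64 int32 samples
        connect<window<256>>(k.out[0], out.in[0]);
        source(k) = "scale.cc";                     // file holding the kernel body
        runtime<ratio>(k) = 0.9;                    // allow up to 90% of one AIE tile
    }
};

ScaleGraph g;

int main() {        // entry point used by the AI Engine simulator
    g.init();
    g.run(1);       // run the graph for one iteration
    g.end();
    return 0;
}
```

A FIR filter written with the AI Engine APIs or MAC intrinsics would typically drop into the same graph structure in place of this scalar kernel.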

Using Alveo Cards to Accelerate Dynamic Workloads

Target Audience

Anyone who needs to accelerate their software applications using AMD-Xilinx Alveo cards

Description

This workshop examines how to use Alveo™ accelerator cards to achieve the highest performance, accelerate any workload, and deploy solutions in the cloud or on premises for data center workloads.

The focus is on:

  • Identifying the available Alveo accelerator cards and their advantages, as well as the available software solution stack
  • Learning how to run designs on Alveo Data Center accelerator cards using the Vitis™ unified software platform
  • Reviewing the available partner solutions in the cloud and on premises