Introduction
Versal is Xilinx's newest category of advanced heterogeneous compute devices. It combines Scalar Engines, Adaptable Engines, and Intelligent Engines to enable software, hardware, and AI acceleration on a single device.
Key capabilities of Versal ACAPs (Adaptive Compute Acceleration Platforms):
- Software, hardware, and AI acceleration
- Multi-core scalar processors + FPGA-class programmability
- Network-on-Chip (NoC) connectivity
- Abundant hardened IPs, I/O, memory
- Advanced signal processing and connectivity
- Adaptable computing engines
- AI Engines for machine learning inference
- Powerful development environment
Versal ACAPs aim to provide a massive boost in application performance combined with software and systems flexibility.
Versal Architecture
The Versal architecture consists of the following core components:
Scalar Engines
The Scalar Engines pair dual-core Arm Cortex-A72 application processors with dual-core Arm Cortex-R5F real-time processors:
- Dual-core 64-bit Cortex-A72 application processor
- Dual-core 32-bit Cortex-R5F real-time processor (lock-step capable)
- NEON media processing engine
- Memory management unit
- Floating-point unit
- Cache-coherent access to shared resources
The scalar engines execute the control-oriented parts of application software.
Adaptable Engines
Configurable engines provide FPGA-class programmability:
- Programmable logic cells
- Programmable interconnect
- Distributed memory and DSP blocks
- High-bandwidth network-on-chip
- 100G+ transceivers
- PCIe Gen4/Gen5, CCIX, and DDR4/LPDDR4 interfaces
Adaptable engines accelerate application workloads in hardware to maximize performance.
Intelligent Engines
Dedicated AI Engines provide machine learning inference acceleration:
- Hundreds of VLIW/SIMD vector processors (AI Engine tiles) optimized for AI operations
- INT8 to INT32 fixed-point and FP32 math
- Support for sparse activations and network pruning
- Customizable topologies and numeric formats
- High-bandwidth on-chip memory
- AI framework integration
On-chip AI acceleration makes Versal deployable across a wide range of AI applications.
Network-on-Chip
An advanced network-on-chip connects all engines via high-throughput links:
- Multi-terabit-per-second aggregate bandwidth
- Sub-microsecond latency
- QoS and security features
- Routing, muxing, and clock synchronization
The intelligent NoC is the backbone providing high-speed connectivity.
Versal AI Core Series
The first Versal ACAP products released are the AI Core series, which feature the AI Engines prominently:
- AI Core series devices (e.g., VC1902)
- Up to 400 AI Engine tiles, each a VLIW/SIMD vector processor
- FP32 and INT8/INT16 fixed-point math
- Up to 133 TOPS of INT8 compute (VC1902)
- High-bandwidth on-chip memory
- AI framework and tools integration
- PCIe Gen4 and CCIX connectivity
The AI Core series targets deploying Versal for AI inference.
Versal Prime Series
The Versal Prime series focuses on programmable-logic (Adaptable Engine) acceleration:
- Prime series devices (e.g., VM1802)
- Large-capacity programmable logic
- Millions of system logic cells
- Thousands of DSP58 engines
- Tens of megabits of block RAM and UltraRAM
- GTY/GTYP transceivers at up to 58 Gb/s
- PCIe Gen4, CCIX, and DDR4/LPDDR4
Prime series provides vast hardware parallelism for workload acceleration.
Versal HBM Series
The Versal HBM series integrates in-package HBM2e memory for big data applications:
- HBM series devices (e.g., VH1582)
- Up to 32 GB of High Bandwidth Memory (HBM2e)
- Roughly 820 GB/s of HBM bandwidth
- 1024-bit interface per HBM stack
- Integrated, hardened memory controllers
- AI acceleration capability
- HBM stacks integrated in-package alongside the programmable logic
High-capacity, high-bandwidth memory keeps Versal fed when crunching big data.
Versal Portfolio
Versal combines differentiated product series for a range of applications:
| Series | Description | Use Cases |
| --- | --- | --- |
| AI Core | Prominent AI Engines | AI inference |
| Prime | Massive programmable logic | Hardware acceleration |
| HBM | High memory bandwidth | Big data analytics |
| Premium | Balanced engines | General purpose |
The product portfolio supports tailoring to specific acceleration needs.
Versal Design Tools
Xilinx provides the Vitis unified software development platform for Versal:
- Vitis – unified environment for software + hardware
- Hardware acceleration development
- Embedded software development
- AI inference development
- Simulation, emulation, profiling
- Libraries, models, and IP
- Works with popular open-source frameworks