An Approach to the Systematic Characterization of Multitask Accelerated CNN Inference in Edge MPSoCs / Cilardo, Alessandro; Maisto, Vincenzo; Mazzocca, Nicola; Rocco Di Torrepadula, Franca. - In: ACM Transactions on Embedded Computing Systems. - ISSN 1539-9087. - 23:3 (2024). [DOI: 10.1145/3611015]
An Approach to the Systematic Characterization of Multitask Accelerated CNN Inference in Edge MPSoCs
Alessandro Cilardo;Vincenzo Maisto;Nicola Mazzocca;Franca Rocco Di Torrepadula
2024
Abstract
Deep Learning is ubiquitous today and is increasingly moving from the cloud down to the edge of networked infrastructures, where it enables embedded applications to perform complex inference tasks close to the data sources, reducing long-distance data movement and alleviating the need for a powerful cloud infrastructure. Edge-class multi-processor system-on-chip (MPSoC) devices featuring an on-chip FPGA fabric offer key advantages for Deep Learning inference, especially for complex applications where multiple models may run concurrently on the same platform. In this work, we propose an approach and a practical framework for the systematic characterization of multithreaded Deep Learning inference on edge FPGA MPSoCs. We instantiate the framework on a real-world MPSoC platform, targeting Xilinx Vitis-AI as a representative example of a commercial Deep Learning acceleration toolkit for edge environments. We design a comprehensive experimental campaign and apply it to the platform for several convolutional neural networks, each trained on three different datasets. We show that our approach can be used for both hardware- and software-level analysis of a target system. Among other findings, the analysis revealed suboptimal behavior of the underlying toolkit runtime, involving the utilization of the accelerator cores and uneven software latency in the support library, which is influenced by the shapes of the input tensors.
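
As a concrete illustration of the kind of workload characterized in the paper, the sketch below issues concurrent CNN inference requests to a Vitis-AI DPU from multiple Python threads through the VART runtime. It is a minimal sketch, not code from the paper: it follows the VART API as shown in public Vitis-AI examples (xir.Graph.deserialize, vart.Runner.create_runner, execute_async/wait), and the model path, input data type, thread count, and run count are illustrative placeholders.

```python
# Minimal sketch: multithreaded CNN inference on a Vitis-AI DPU via VART.
# Assumes a compiled .xmodel and the vart/xir Python bindings from Vitis-AI;
# model path, dummy input data, and thread/run counts are placeholders.
import threading
import numpy as np
import vart
import xir


def get_dpu_subgraph(xmodel_path):
    # Select the DPU-mapped subgraph from the compiled model graph.
    graph = xir.Graph.deserialize(xmodel_path)
    subgraphs = graph.get_root_subgraph().toposort_child_subgraph()
    return [s for s in subgraphs
            if s.has_attr("device") and s.get_attr("device").upper() == "DPU"][0]


def worker(subgraph, n_runs):
    # Each thread owns its own runner, mirroring a multitask inference scenario.
    runner = vart.Runner.create_runner(subgraph, "run")
    in_tensor = runner.get_input_tensors()[0]
    out_tensor = runner.get_output_tensors()[0]
    in_buf = np.zeros(tuple(in_tensor.dims), dtype=np.int8)   # dummy input
    out_buf = np.zeros(tuple(out_tensor.dims), dtype=np.int8)
    for _ in range(n_runs):
        job_id = runner.execute_async([in_buf], [out_buf])
        runner.wait(job_id)


if __name__ == "__main__":
    sg = get_dpu_subgraph("model.xmodel")  # placeholder path to a compiled model
    threads = [threading.Thread(target=worker, args=(sg, 100)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

Measuring per-request latency and accelerator-core utilization under this kind of concurrent load is the type of characterization the proposed framework systematizes.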
| File | Size | Format |
|---|---|---|
| 3611015-1.pdf (open access, Creative Commons license) | 3.52 MB | Adobe PDF |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


