BIBLIOGRAPHY

Publications, working papers, and other research using data resources from IPUMS.

Full Citation

Title: Strategies for Optimizing End-to-End Artificial Intelligence Pipelines on Intel Xeon Processors

Citation Type: Miscellaneous

Publication Year: 2022

Abstract: End-to-end (E2E) artificial intelligence (AI) pipelines comprise several stages, including data preprocessing, data ingestion, model definition and training, hyperparameter optimization, deployment, inference, and postprocessing, followed by downstream analyses. Obtaining an efficient E2E workflow requires optimizing nearly every stage of the pipeline. Intel® Xeon® processors offer large memory capacities bundled with AI acceleration (e.g., Intel Deep Learning Boost), are well suited to running multiple instances of training and inference pipelines in parallel, and have a low total cost of ownership (TCO). To showcase performance on Xeon processors, we applied comprehensive optimization strategies, coupled with software and hardware acceleration, to a variety of E2E pipelines in areas such as computer vision, NLP, and recommendation systems. We achieved performance improvements ranging from 1.8x to 81.7x across the different E2E pipelines. In this paper, we highlight the optimization strategies we adopted to achieve this performance on Intel® Xeon® processors with a set of eight different E2E pipelines.
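A minimal sketch of the E2E pipeline structure the abstract describes: distinct stages (preprocessing, training, inference, postprocessing) chained into one workflow. Every function body here is an illustrative placeholder of our own, not code from the cited paper.

```python
# Hypothetical E2E pipeline skeleton illustrating the stages named in the
# abstract. The trivial "mean" model and threshold are stand-ins only.

def preprocess(raw):
    # Data preprocessing: min-max normalize numeric features to [0, 1].
    lo, hi = min(raw), max(raw)
    return [(x - lo) / (hi - lo) for x in raw]

def train(data):
    # Training placeholder: fit a trivial model (the mean of the data).
    return sum(data) / len(data)

def infer(model, sample):
    # Inference placeholder: score a new sample against the model.
    return abs(sample - model)

def postprocess(score, threshold=0.5):
    # Postprocessing: turn the raw score into a downstream decision.
    return "anomaly" if score > threshold else "normal"

def pipeline(raw, sample):
    # End-to-end flow: preprocess -> train -> infer -> postprocess.
    data = preprocess(raw)
    model = train(data)
    return postprocess(infer(model, sample))
```

Because each stage is a separate function, optimization effort (the paper's focus) can target any stage independently, and multiple pipeline instances can be launched in parallel on a many-core processor.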

Url: https://arxiv.org/pdf/2211.00286.pdf

User Submitted?: No

Authors: Arunachalam, Meena; Sanghavi, Vrushabh; Yao, Yi A; Zhou, Yi A; Wang, Lifeng A; Wen, Zongru; Ammbashankar, Niroop; Wang, Ning W; Mohammad, Fahim

Publisher:

Data Collections: IPUMS USA

Topics: Methodology and Data Collection, Population Data Science

Countries: