Deployment Service for Scalable Distributed Deep Learning Training on Multiple Clouds

Javier Jorge, Germán Moltó, Damian Segrelles, João Fontes, and Miguel Guevara. Deployment Service for Scalable Distributed Deep Learning Training on Multiple Clouds. In Proceedings of the 11th International Conference on Cloud Computing and Services Science, pp. 135–142, SCITEPRESS - Science and Technology Publications, 2021.

Download

[1.7MB pdf] [HTML]

Abstract

This paper introduces a platform based on open-source tools to automatically deploy and provision a distributed set of nodes that conduct the training of a deep learning model. To this end, the deep learning framework TensorFlow is used, together with the Infrastructure Manager service to deploy complex infrastructures programmatically. The provisioned infrastructure addresses data handling, model training using these data, and the persistence of the trained model. For this purpose, public Cloud platforms such as Amazon Web Services (AWS), together with General-Purpose Computing on Graphics Processing Units (GPGPU), are employed to dynamically and efficiently perform the workflow of tasks related to training deep learning models. This approach has been applied to real-world use cases to compare local training versus distributed training on the Cloud. The results indicate that the dynamic provisioning of GPU-enabled distributed virtual clusters in the Cloud introduces great flexibility to cost-effectively train deep learning models.
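As an illustration of the kind of distributed set-up the abstract describes (not the authors' actual code), TensorFlow's multi-worker training is typically wired together through a `TF_CONFIG` environment variable that each provisioned node receives. The sketch below builds such a configuration for a hypothetical three-node virtual cluster; the addresses and port are made-up placeholders.

```python
import json
import os

# Hypothetical cluster layout for three GPU-enabled worker nodes, as a
# provisioning service might expose them (addresses/port are illustrative).
cluster = {
    "cluster": {
        "worker": [
            "10.0.0.1:2222",  # worker 0 acts as chief
            "10.0.0.2:2222",
            "10.0.0.3:2222",
        ]
    },
    # Each node gets its own index; this example is written from the
    # perspective of worker 0.
    "task": {"type": "worker", "index": 0},
}

# TensorFlow reads TF_CONFIG at start-up to discover its peers.
os.environ["TF_CONFIG"] = json.dumps(cluster)

# On every node one would then build the model inside the strategy scope:
#   strategy = tf.distribute.MultiWorkerMirroredStrategy()
#   with strategy.scope():
#       model = build_model()
print(os.environ["TF_CONFIG"])
```

In a deployment service such as the one described, the infrastructure orchestrator would render a per-node `TF_CONFIG` like this at provisioning time, so each virtual machine starts the same training script with only its `task.index` differing.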

BibTeX Entry

@inproceedings{Jorge2021dss,
   abstract = {This paper introduces a platform based on open-source tools to automatically deploy and provision a distributed set of nodes that conduct the training of a deep learning model. To this end, the deep learning framework TensorFlow is used, together with the Infrastructure Manager service to deploy complex infrastructures programmatically. The provisioned infrastructure addresses data handling, model training using these data, and the persistence of the trained model. For this purpose, public Cloud platforms such as Amazon Web Services (AWS), together with General-Purpose Computing on Graphics Processing Units (GPGPU), are employed to dynamically and efficiently perform the workflow of tasks related to training deep learning models. This approach has been applied to real-world use cases to compare local training versus distributed training on the Cloud. The results indicate that the dynamic provisioning of GPU-enabled distributed virtual clusters in the Cloud introduces great flexibility to cost-effectively train deep learning models.},
   author = {Javier Jorge and Germán Moltó and Damian Segrelles and João Fontes and Miguel Guevara},
   doi = {10.5220/0010359601350142},
   isbn = {978-989-758-510-4},
   booktitle = {Proceedings of the 11th International Conference on Cloud Computing and Services Science},
   pages = {135--142},
   publisher = {SCITEPRESS - Science and Technology Publications},
   title = {Deployment Service for Scalable Distributed Deep Learning Training on Multiple Clouds},
   url = {https://www.scitepress.org/DigitalLibrary/Link.aspx?doi=10.5220/0010359601350142},
   year = {2021}
}

Generated by bib2html.pl (written by Patrick Riley) on Sat Mar 29, 2025 17:39:01