Conference paper, Year: 2021

Efficient and Systematic Partitioning of Large and Deep Neural Networks for Parallelization

Abstract

Deep neural networks (DNNs) play an increasingly important role in our daily lives. Since the size of DNNs keeps growing, it is essential to train them efficiently by distributing the computation across multiple connected devices. Training efficiency depends on the quality of the chosen parallelization strategy, and finding a good strategy for a DNN in a reasonable amount of time is not trivial. Previous research demonstrated that good parallelization strategies can be generated systematically. However, systematic partitioning still suffers from either heavy preprocessing or poor parallelization quality. In this paper, we take a purely symbolic analysis approach, leveraging characteristic features of DNNs such as dense tensors and balanced computation. We propose the Flex-Edge Recursive Graph and the Double Recursive Algorithm, which limit parallelization strategy generation to linear complexity while preserving the quality of the resulting strategy. Experiments show that our solution reduces the strategy generation time from hours to seconds while maintaining parallelization quality.
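To make the recursive idea concrete, the following is a minimal, hypothetical Python sketch of recursive partitioning-strategy generation. The function names and the cost model are assumptions for illustration only; they do not reproduce the paper's actual Flex-Edge Recursive Graph or Double Recursive Algorithm, but show how choosing one split per device-halving step keeps the number of decisions small.

    from typing import List, Tuple

    def cross_section(shape: Tuple[int, ...], dim: int) -> int:
        """Stand-in cost model (an assumption): the size of the slice kept on
        each device when `dim` is cut in half, a proxy for communication volume."""
        total = 1
        for d, s in enumerate(shape):
            if d != dim:
                total *= s
        return total

    def recursive_partition(shape: Tuple[int, ...], devices: int) -> List[int]:
        """Recursively split the tensor across `devices` (assumed a power of two),
        picking one dimension per halving step with the stand-in cost model."""
        if devices <= 1:
            return []  # a single device needs no further splitting
        # Greedily cut the dimension whose cross-section is cheapest to exchange.
        best_dim = min(range(len(shape)), key=lambda d: cross_section(shape, d))
        halved = list(shape)
        halved[best_dim] = max(1, halved[best_dim] // 2)
        # One decision per halving level, so the number of decisions grows
        # logarithmically with the device count and linearly with the tensors.
        return [best_dim] + recursive_partition(tuple(halved), devices // 2)

    # Example: a hypothetical 1024x512 weight tensor split across 8 devices.
    print(recursive_partition((1024, 512), 8))  # -> [0, 0, 1] with this cost model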
No file deposited

Dates and versions

hal-03526611, version 1 (14-01-2022)

Identifiers

Cite

Haoran Wang, Chong Li, Thibaut Tachon, Hongxing Wang, Sheng Yang, et al. Efficient and Systematic Partitioning of Large and Deep Neural Networks for Parallelization. European Conference on Parallel Processing (Euro-Par 2021), 2021, Lisbon, Portugal. pp. 201-216. ⟨10.1007/978-3-030-85665-6_13⟩. ⟨hal-03526611⟩