A Novel Method for Communication-Efficient and Privacy-Preserving AI Model Generation and Optimization through Federated Learning
EasyChair Preprint 11646, 14 pages. Date: December 28, 2023

Abstract
In federated learning (FL), a single global model is trained collaboratively with the aid of numerous client machines and devices, with everything coordinated by a central server. However, given the variability of the data, a single global model can perform poorly for some clients taking part in federated learning. Therefore, to deal with the difficulties brought in by statistical heterogeneity and the non-Independent and Identically Distributed (non-IID) distribution of data, personalization of the global model becomes essential. In contrast to earlier research works, we propose a novel method for creating a personalized model. This further encourages all clients to take part in the federation even in the presence of statistical heterogeneity, since the arrangement enhances their own performance rather than making them serve merely as a resource for the central server's model training. To achieve this personalization, we use hybrid pruning, a combination of structured and unstructured pruning, to identify a small subnetwork for each client. Each pruning technique is applied according to a target sparsity percentage. In this work, we present an experimental implementation and evaluation of these pruning techniques, showing that they reduce the communication cost. This also helps the FL process to operate over low-bandwidth Internet connections.

Keyphrases: Federated Learning, Model Compression, Model Accuracy and Performance, Pruning and Quantization
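The abstract does not specify how the structured and unstructured stages are combined. A minimal sketch, assuming structured pruning removes whole rows (e.g. output neurons) by L2 norm and unstructured pruning zeroes individual weights by magnitude, each driven by a sparsity fraction, might look like this (all function names here are illustrative, not from the paper):

```python
import numpy as np

def unstructured_prune(w, sparsity):
    """Zero out the smallest-magnitude individual weights (illustrative)."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    # k-th smallest absolute value acts as the pruning threshold
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return w * (np.abs(w) > thresh)

def structured_prune(w, sparsity):
    """Zero out entire rows with the smallest L2 norm (illustrative)."""
    k = int(sparsity * w.shape[0])
    pruned = w.copy()
    if k > 0:
        idx = np.argsort(np.linalg.norm(w, axis=1))[:k]
        pruned[idx, :] = 0.0
    return pruned

def hybrid_prune(w, structured_sparsity, unstructured_sparsity):
    """Apply structured pruning first, then unstructured pruning on the rest."""
    return unstructured_prune(structured_prune(w, structured_sparsity),
                              unstructured_sparsity)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(10, 8))
    p = hybrid_prune(w, structured_sparsity=0.2, unstructured_sparsity=0.5)
    print("zeros:", np.count_nonzero(p == 0), "of", p.size)
```

Only the resulting sparse subnetwork (or its mask plus nonzero values) would then need to be exchanged between client and server, which is how such pruning lowers communication cost.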