Federated Learning in Massive MIMO Networks: Convergence Analysis and Communication-Efficient Design
TIME: 3:30 PM
LOCATION: GMCS 314
SPEAKER: Tharm Ratnarajah, Electrical & Computer Engineering, San Diego State University
ABSTRACT:
In federated learning (FL), model weights must be updated at both the local users and the base station (BS) or server. These weights are subject to uplink (UL) and downlink (DL) transmission errors due to the limited reliability of wireless channels. In this work, we investigate the impact of imperfections in both UL and DL links. First, for a multi-user massive multiple-input multiple-output (mMIMO) 6G network employing zero-forcing (ZF) and minimum mean-squared-error (MMSE) schemes, we analyze the weight estimation errors in each round. A tighter convergence bound of order O(σ²/T) on the model error of the communication-efficient FL algorithm is derived, where T is the number of communication rounds and σ² denotes the variance of the overall communication error, including the quantization noise. The analysis shows that the reliability of the DL links is more critical than that of the UL links. The transmit power can be varied during training to reduce energy consumption. We also vary the number of local training steps, the average codeword length after quantization, and the scheduling policy to improve communication efficiency. Simulations on image classification problems with the MNIST, EMNIST and FMNIST datasets verify the derived bound and allow us to infer the minimum SNR required for successful convergence of the FL algorithm.
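
The abstract walks through one FL round over imperfect links: a noisy DL broadcast, local training, quantization, and a noisy UL transmission. The minimal Python sketch below illustrates that round structure under simplifying assumptions that are ours, not the speaker's: additive white Gaussian noise (AWGN) links stand in for the mMIMO ZF/MMSE channels, a uniform scalar quantizer stands in for the codeword-length control, and a toy quadratic objective replaces MNIST training. All function names and parameter values are illustrative.

import numpy as np

# Sketch of FL rounds with noisy DL/UL links and quantization.
# Everything here is an illustrative assumption, not the talk's algorithm.

rng = np.random.default_rng(0)

def awgn(w, snr_db):
    """Pass a weight vector through an AWGN link at the given SNR (dB)."""
    signal_power = np.mean(w ** 2)
    noise_var = signal_power / (10 ** (snr_db / 10))
    return w + rng.normal(0.0, np.sqrt(noise_var), size=w.shape)

def quantize(w, n_bits=8):
    """Uniform scalar quantizer; fewer bits means a shorter codeword."""
    scale = np.max(np.abs(w)) + 1e-12
    levels = 2 ** (n_bits - 1)
    return np.round(w / scale * levels) / levels * scale

def local_sgd(w, grad_fn, steps=5, lr=0.1):
    """A few local gradient steps at a user (the 'local training steps')."""
    for _ in range(steps):
        w = w - lr * grad_fn(w)
    return w

# Toy quadratic objective per user, standing in for MNIST-style training.
targets = [rng.normal(size=10) for _ in range(4)]
grad_fns = [lambda w, t=t: w - t for t in targets]

w_global = np.zeros(10)
for rnd in range(50):
    updates = []
    for grad_fn in grad_fns:
        w_local = awgn(w_global, snr_db=10.0)       # noisy DL broadcast
        w_local = local_sgd(w_local, grad_fn)       # local training steps
        w_local = quantize(w_local, n_bits=8)       # quantize before UL
        updates.append(awgn(w_local, snr_db=10.0))  # noisy UL transmission
    w_global = np.mean(updates, axis=0)             # server aggregation

print("distance to optimum:", np.linalg.norm(w_global - np.mean(targets, axis=0)))

Lowering snr_db on the DL broadcast versus the UL transmission is one way to probe the abstract's claim that DL reliability matters more, since a noisy broadcast corrupts the starting point of every user's local training in that round.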
HOST: Sunil Kumar