The paper considers a distributed version of deep reinforcement learning (DRL) for multi-agent decision-making in the paradigm of federated learning. Since the deep neural network models in federated learning are trained locally and aggregated iteratively through a central server, frequent information exchange incurs a large amount of communication overhead. The paper proposes a utility function that balances reducing communication overhead against improving convergence performance. Meanwhile, the paper develops two new optimization methods on top of variation-aware periodic averaging: 1) a decay-based method, which gradually decreases the weight of the model's local gradients as local updating progresses, and 2) a consensus-based algorithm. The paper also provides novel convergence guarantees for both methods and demonstrates their effectiveness and efficiency through theoretical analysis and numerical simulation results.
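To make the decay-based idea concrete, here is a minimal sketch of periodic averaging in which each agent's local gradient weight shrinks geometrically as local updating progresses. Everything here is illustrative: the decay schedule, quadratic toy losses, step counts, and function names are assumptions, not the authors' exact algorithm or notation.

```python
import numpy as np

def local_update(w, grad_fn, tau, lr, decay):
    """Run tau local SGD steps, shrinking the gradient's weight each step."""
    w = w.copy()
    for t in range(tau):
        # Decay-based method (sketch): the local gradient's weight
        # decreases geometrically within the progress of local updating.
        w -= lr * (decay ** t) * grad_fn(w)
    return w

def periodic_averaging(w0, grad_fns, rounds, tau=5, lr=0.1, decay=0.9):
    """Each round: every agent updates locally, then a server averages the models."""
    w = w0.copy()
    for _ in range(rounds):
        local_models = [local_update(w, g, tau, lr, decay) for g in grad_fns]
        w = np.mean(local_models, axis=0)  # server-side aggregation
    return w

# Toy example: two agents with quadratic losses centred at 1 and 3;
# the averaged model should approach the midpoint, 2.
grad_fns = [lambda w: w - 1.0, lambda w: w - 3.0]
w_star = periodic_averaging(np.array([0.0]), grad_fns, rounds=50)
print(float(w_star[0]))
```

Averaging only every `tau` local steps is what reduces communication: agents exchange models once per round instead of once per gradient step, at the cost of some drift between local models that the decay factor helps dampen.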

Author(s) : Xing Xu, Rongpeng Li, Zhifeng Zhao, Honggang Zhang

Links : PDF - Abstract

Code :

Keywords : convergence - methods - learning - agent - local
