Paper Title
Local Stochastic Approximation: A Unified View of Federated Learning and Distributed Multi-Task Reinforcement Learning Algorithms
Paper Authors
Paper Abstract
Motivated by broad applications in reinforcement learning and federated learning, we study local stochastic approximation over a network of agents, whose goal is to find the root of an operator composed of the agents' local operators. Our focus is on characterizing the finite-time performance of this method when the data at each agent are generated from Markov processes and are therefore dependent. In particular, we provide convergence rates of local stochastic approximation for both constant and time-varying step sizes. Our results show that these rates are within a logarithmic factor of those obtained under independent data. We then illustrate the application of these results to several interesting problems in multi-task reinforcement learning and federated learning.
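To make the setting concrete, below is a minimal Python sketch of the kind of scheme the abstract describes: each agent runs stochastic-approximation updates driven by samples from its own Markov chain, and the local iterates are periodically averaged. The particular local operators, the two-state Markov chains, the step-size schedule, and the averaging period are illustrative assumptions, not the paper's exact algorithm or rates.

import numpy as np

# Sketch of local stochastic approximation with Markovian data (assumed setup).
rng = np.random.default_rng(0)
N, d = 4, 3                              # number of agents, iterate dimension
A = [np.eye(d) + 0.1 * rng.standard_normal((d, d)) for _ in range(N)]
b = [rng.standard_normal(d) for _ in range(N)]

# Two-state Markov chain per agent that modulates the noise, so samples are dependent.
P = np.array([[0.9, 0.1], [0.2, 0.8]])
states = np.zeros(N, dtype=int)

def local_operator(i, x, s):
    """Noisy local operator F_i(x, s); its mean-path root solves A_i x = b_i."""
    noise_scale = 0.5 if s == 0 else 1.5
    return b[i] - A[i] @ x + noise_scale * rng.standard_normal(d)

x = np.zeros((N, d))                     # local iterates, one per agent
K, sync_every = 2000, 10                 # iterations and communication period (assumed)

for k in range(K):
    alpha = 1.0 / (k + 10)               # time-varying step size
    for i in range(N):
        states[i] = rng.choice(2, p=P[states[i]])          # next Markovian sample
        x[i] += alpha * local_operator(i, x[i], states[i])  # local SA step
    if (k + 1) % sync_every == 0:
        x[:] = x.mean(axis=0)            # periodic averaging across agents

# The averaged iterate approximates the root of the averaged operator,
# i.e. the solution of (sum_i A_i) x = sum_i b_i.
x_bar = x.mean(axis=0)
x_star = np.linalg.solve(sum(A), sum(b))
print("distance to root:", np.linalg.norm(x_bar - x_star))

The averaging step is what makes the method "local": agents communicate only every sync_every iterations rather than after every update, which is the regime in which the abstract's constant and time-varying step-size rates are stated.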