Paper Title
Beyond spectral gap: The role of the topology in decentralized learning
Paper Authors
Paper Abstract
In data-parallel optimization of machine learning models, workers collaborate to improve their estimates of the model: more accurate gradients allow them to use larger learning rates and optimize faster. We consider the setting in which all workers sample from the same dataset and communicate over a sparse graph (decentralized). In this setting, current theory fails to capture important aspects of real-world behavior. First, the 'spectral gap' of the communication graph is not predictive of its empirical performance in (deep) learning. Second, current theory does not explain why collaboration enables larger learning rates than training alone. In fact, it prescribes smaller learning rates, which further decrease as graphs become larger, failing to explain convergence in infinite graphs. This paper aims to paint an accurate picture of sparsely-connected distributed optimization when workers share the same data distribution. We quantify how the graph topology influences convergence in a quadratic toy problem, and provide theoretical results for general smooth and (strongly) convex objectives. Our theory matches empirical observations in deep learning and accurately describes the relative merits of different graph topologies.
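The setting the abstract describes can be made concrete with a small simulation. Below is a minimal sketch (not the authors' code) of decentralized SGD on a quadratic toy problem: every worker optimizes the same objective with i.i.d. gradient noise, takes a local SGD step, and gossip-averages with its neighbors. The ring topology, the uniform 1/3 gossip weights, and all hyperparameters (`n_workers`, `lr`, `noise`, ...) are illustrative assumptions; `spectral_gap` computes the quantity 1 - |λ₂(W)| that classical decentralized analyses rely on.

```python
import numpy as np

rng = np.random.default_rng(0)

def ring_gossip_matrix(n: int) -> np.ndarray:
    """Doubly-stochastic gossip matrix for a ring: each worker averages
    uniformly with itself and its two neighbors (illustrative choice)."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 1 / 3
        W[i, (i - 1) % n] = 1 / 3
        W[i, (i + 1) % n] = 1 / 3
    return W

def spectral_gap(W: np.ndarray) -> float:
    """1 - |lambda_2(W)|, the quantity classical analyses depend on."""
    eigvals = np.sort(np.abs(np.linalg.eigvals(W)))[::-1]
    return 1.0 - eigvals[1]

# Illustrative hyperparameters, not taken from the paper.
n_workers, dim, lr, steps, noise = 32, 10, 0.1, 500, 1.0
W = ring_gossip_matrix(n_workers)
print(f"spectral gap of the ring: {spectral_gap(W):.4f}")

# Quadratic toy objective f(x) = 0.5 * x^T A x shared by all workers;
# each worker sees the same objective plus i.i.d. gradient noise
# (the shared-data setting the abstract considers).
A = np.diag(np.linspace(0.1, 1.0, dim))
X = rng.normal(size=(n_workers, dim))  # one model (row) per worker

for _ in range(steps):
    grads = X @ A + noise * rng.normal(size=X.shape)  # stochastic gradients
    X = W @ (X - lr * grads)  # local SGD step, then gossip averaging

print(f"average distance to optimum: {np.linalg.norm(X, axis=1).mean():.4f}")
```

Swapping the ring for another topology (e.g., a fully connected graph with `W = np.ones((n, n)) / n`) changes the spectral gap drastically; the paper's point is that the resulting change in empirical convergence need not track the spectral gap at all.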