Paper Title
A-LAQ: Adaptive Lazily Aggregated Quantized Gradient
Paper Authors
Paper Abstract
Federated Learning (FL) plays a prominent role in solving machine learning problems with data distributed across clients. In FL, to reduce the communication overhead between clients and the server, each client communicates its local FL parameters instead of its local data. However, when a wireless network connects the clients and the server, the clients' communication resource limitations may prevent the FL training iterations from completing. Therefore, communication-efficient variants of FL have been widely investigated. Lazily Aggregated Quantized Gradient (LAQ) is one of the promising communication-efficient approaches to lowering resource usage in FL. However, LAQ assigns a fixed number of bits for all iterations, which may be communication-inefficient when the number of iterations is medium to high or as convergence approaches. This paper proposes Adaptive Lazily Aggregated Quantized Gradient (A-LAQ), a method that significantly extends LAQ by assigning an adaptive number of communication bits across the FL iterations. We train FL under an energy-constrained condition and analyze the convergence of A-LAQ. The experimental results highlight that A-LAQ outperforms LAQ, with up to a $50\%$ reduction in spent communication energy and an $11\%$ increase in test accuracy.
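To make the key idea concrete, below is a minimal Python sketch of adaptive-bit gradient quantization, assuming a simple decaying bit schedule. The function names (`adaptive_bits`, `quantize`) and the linear schedule are illustrative assumptions for exposition, not the paper's exact A-LAQ algorithm, which selects bits under an energy constraint.

```python
# Hypothetical sketch: quantize each client's gradient with a bit
# budget that adapts over FL iterations, instead of LAQ's fixed width.
# The linear bit schedule below is an assumption for illustration.
import numpy as np

def adaptive_bits(iteration, total_iterations, b_max=8, b_min=2):
    """Shrink the bit budget linearly as training approaches convergence."""
    frac = iteration / max(total_iterations - 1, 1)
    return max(b_min, round(b_max - frac * (b_max - b_min)))

def quantize(grad, bits):
    """Uniform quantization of a gradient vector to `bits` bits per entry."""
    levels = 2 ** bits - 1
    g_min, g_max = grad.min(), grad.max()
    scale = (g_max - g_min) / levels if g_max > g_min else 1.0
    codes = np.round((grad - g_min) / scale)  # integer codes in [0, levels]
    return codes * scale + g_min              # dequantized gradient

rng = np.random.default_rng(0)
grad = rng.standard_normal(10)
for t in (0, 50, 99):
    b = adaptive_bits(t, 100)
    err = np.linalg.norm(grad - quantize(grad, b))
    print(f"iter {t}: {b} bits, quantization error {err:.4f}")
```

Running this shows the trade-off the abstract describes: later iterations spend fewer bits (and hence less communication energy) at the cost of a coarser gradient, which is tolerable once the model is near convergence.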