Paper Title
Q-Ensemble for Offline RL: Don't Scale the Ensemble, Scale the Batch Size
Paper Authors
Paper Abstract
Training large neural networks is known to be time-consuming, with the learning duration taking days or even weeks. To address this problem, large-batch optimization was introduced. This approach demonstrated that scaling mini-batch sizes with appropriate learning rate adjustments can speed up the training process by orders of magnitude. While long training time was not typically a major issue for model-free deep offline RL algorithms, recently introduced Q-ensemble methods achieving state-of-the-art performance made this issue more relevant, notably extending the training duration. In this work, we demonstrate how this class of methods can benefit from large-batch optimization, which is commonly overlooked by the deep offline RL community. We show that scaling the mini-batch size and naively adjusting the learning rate allows for (1) a reduced size of the Q-ensemble, (2) stronger penalization of out-of-distribution actions, and (3) improved convergence time, effectively shortening training duration by 3-4x on average.
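
The "naive" learning-rate adjustment mentioned above can be made concrete with a short sketch. The linear-scaling heuristic, the base batch size of 256, and the base learning rate of 3e-4 below are illustrative assumptions, not necessarily the exact rule or values used in the paper.

# Minimal sketch: scale the learning rate linearly with the mini-batch size.
# The baseline values and the linear rule are assumptions for illustration only.

BASE_BATCH_SIZE = 256      # assumed baseline mini-batch size
BASE_LEARNING_RATE = 3e-4  # assumed baseline Adam learning rate

def scaled_learning_rate(batch_size: int) -> float:
    """Return a learning rate scaled proportionally to the mini-batch size."""
    return BASE_LEARNING_RATE * (batch_size / BASE_BATCH_SIZE)

if __name__ == "__main__":
    for batch_size in (256, 1024, 4096):
        print(f"batch={batch_size:5d}  lr={scaled_learning_rate(batch_size):.2e}")

Under this heuristic, enlarging the mini-batch by some factor enlarges the learning rate by the same factor, so fewer, larger gradient steps can stand in for many small ones; this is the intuition behind how larger batches can shorten wall-clock training time.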