Paper Title

Interpretable Latent Variables in Deep State Space Models

Authors

Wu, Haoxuan, Matteson, David S., Wells, Martin T.

Abstract

We introduce a new version of deep state-space models (DSSMs) that combines a recurrent neural network with a state-space framework to forecast time series data. The model estimates the observed series as functions of latent variables that evolve non-linearly through time. Due to the complexity and non-linearity inherent in DSSMs, previous work on DSSMs has typically produced latent variables that are very difficult to interpret. Our paper focuses on producing interpretable latent parameters with two key modifications. First, we simplify the predictive decoder by restricting the response variables to be a linear transformation of the latent variables plus some noise. Second, we utilize shrinkage priors on the latent variables to reduce redundancy and improve robustness. These changes make the latent variables much easier to understand and allow us to interpret the resulting latent variables as random effects in a linear mixed model. We show on two public benchmark datasets that the resulting model improves forecasting performance.
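As a rough sketch of the structure described above (the notation, and the particular form of shrinkage prior, are assumptions for illustration and not taken from the paper), the two modifications amount to a nonlinear, RNN-driven latent transition paired with a linear Gaussian emission:

    z_t = f_\theta\big(z_{t-1}, \mathrm{RNN}_\phi(x_{1:t})\big) + \eta_t            (nonlinear latent transition)
    y_t = B z_t + \varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}(0, \sigma^2 I)   (linear decoder: response = linear map of latents + noise)

with a shrinkage prior placed on the latent variables, for example a global-local scale mixture of the form z_{t,k} \mid \tau, \lambda_k \sim \mathcal{N}(0, \tau^2 \lambda_k^2), so that redundant latent dimensions are pulled toward zero; how exactly the paper parameterizes this prior is not specified in the abstract.

Because the emission is linear with additive Gaussian noise, y_t = B z_t + \varepsilon_t has the same form as a linear mixed model with z_t playing the role of random effects, which is what makes the resulting latent variables directly interpretable.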
