Paper Title
Deep Optimized Multiple Description Image Coding via Scalar Quantization Learning
Paper Authors
Paper Abstract
In this paper, we introduce a deep multiple description coding (MDC) framework optimized by minimizing multiple description (MD) compressive loss. First, an MD multi-scale dilated encoder network generates multiple description tensors, which are discretized by scalar quantizers; these quantized tensors are then decompressed by MD cascaded-ResBlock decoder networks. To greatly reduce the total number of artificial neural network parameters, the auto-encoder network composed of these two types of networks is designed as a symmetric parameter-sharing structure. Second, this auto-encoder network and a pair of scalar quantizers are learned simultaneously in an end-to-end, self-supervised way. Third, to account for variation in the spatial distribution of images, each scalar quantizer is paired with an importance-indicator map to generate MD tensors, rather than using direct quantization. Fourth, we introduce a multiple description structural similarity distance loss, which implicitly regularizes diversified multiple description generation, to explicitly supervise diversified multiple description decoding in addition to the MD reconstruction loss. Finally, we demonstrate that our MDC framework outperforms several state-of-the-art MDC approaches in image coding efficiency when tested on several commonly used datasets.
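The abstract's third point, pairing each scalar quantizer with an importance-indicator map instead of quantizing directly, can be illustrated with a minimal numpy sketch. This is a hypothetical toy, not the paper's implementation: the function name `importance_guided_quantize`, the fixed level set, and the 0.5 masking threshold are all assumptions made for illustration; in the paper both the quantizer levels and the importance maps are learned end-to-end.

```python
import numpy as np

def importance_guided_quantize(tensor, importance_map, levels):
    """Hypothetical sketch: snap each feature value to its nearest scalar
    quantization level, then gate the result with an importance-indicator
    map so low-importance spatial positions are zeroed out."""
    # Hard nearest-level assignment: compare every value against all levels.
    idx = np.argmin(np.abs(tensor[..., None] - levels), axis=-1)
    quantized = levels[idx]
    # Importance-indicator map decides which positions survive (assumed
    # threshold of 0.5 for this toy; the paper learns the map instead).
    return quantized * (importance_map > 0.5)

# Toy example: a 4x4 single-channel feature map and 4 quantization levels.
rng = np.random.default_rng(0)
feat = rng.normal(size=(4, 4))
imp = rng.uniform(size=(4, 4))
levels = np.array([-1.0, -0.3, 0.3, 1.0])
md_tensor = importance_guided_quantize(feat, imp, levels)
print(md_tensor.shape)  # (4, 4)
```

Every surviving entry of `md_tensor` is one of the four levels, while masked positions become zero; in the full framework two such quantized tensors form the two descriptions that the cascaded-ResBlock decoders reconstruct from.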