Paper Title

Instant Neural Graphics Primitives with a Multiresolution Hash Encoding

Authors

Thomas Müller, Alex Evans, Christoph Schied, Alexander Keller

Abstract

Neural graphics primitives, parameterized by fully connected neural networks, can be costly to train and evaluate. We reduce this cost with a versatile new input encoding that permits the use of a smaller network without sacrificing quality, thus significantly reducing the number of floating point and memory access operations: a small neural network is augmented by a multiresolution hash table of trainable feature vectors whose values are optimized through stochastic gradient descent. The multiresolution structure allows the network to disambiguate hash collisions, making for a simple architecture that is trivial to parallelize on modern GPUs. We leverage this parallelism by implementing the whole system using fully-fused CUDA kernels with a focus on minimizing wasted bandwidth and compute operations. We achieve a combined speedup of several orders of magnitude, enabling training of high-quality neural graphics primitives in a matter of seconds, and rendering in tens of milliseconds at a resolution of ${1920\!\times\!1080}$.
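To make the encoding described above concrete, here is a minimal NumPy sketch of a 2D multiresolution hash encoding: each resolution level owns a hash table of trainable feature vectors, a query point is bilinearly interpolated from the hashed cell corners at every level, and the per-level features are concatenated before being fed to the small MLP. The hyperparameters (`L`, `T`, `F`, `N_min`, `N_max`) and the 2D setting are illustrative assumptions, not the paper's exact configuration, and the real system runs as fused CUDA kernels with gradients flowing into the tables.

```python
import numpy as np

# Illustrative hyperparameters (assumptions, not the paper's defaults):
L = 4                   # number of resolution levels
T = 2**14               # hash table size per level
F = 2                   # feature dimension per level
N_min, N_max = 16, 128  # coarsest / finest grid resolution

rng = np.random.default_rng(0)
# One trainable table of feature vectors per level, initialized small.
# (Here they are just random; in training they are optimized by SGD.)
tables = [rng.uniform(-1e-4, 1e-4, size=(T, F)) for _ in range(L)]

# Spatial hash: XOR of per-dimension coordinates times large primes, mod T.
PRIMES = np.array([1, 2654435761], dtype=np.uint64)

def hash_coords(coords):
    """Map integer grid coordinates (..., 2) to table indices in [0, T)."""
    h = np.zeros(coords.shape[:-1], dtype=np.uint64)
    for d in range(coords.shape[-1]):
        h ^= coords[..., d].astype(np.uint64) * PRIMES[d]
    return (h % T).astype(np.int64)

def encode(x):
    """Encode a point x in [0, 1]^2 into L * F concatenated features."""
    # Geometric growth factor between consecutive grid resolutions.
    b = np.exp((np.log(N_max) - np.log(N_min)) / (L - 1))
    feats = []
    for l in range(L):
        N_l = int(np.floor(N_min * b**l))   # grid resolution at level l
        pos = x * N_l
        lo = np.floor(pos).astype(np.int64) # lower corner of the cell
        frac = pos - lo                     # position inside the cell
        # Bilinear interpolation over the 4 corners of the cell.
        f = np.zeros(F)
        for dx in (0, 1):
            for dy in (0, 1):
                corner = lo + np.array([dx, dy])
                w = ((frac[0] if dx else 1 - frac[0]) *
                     (frac[1] if dy else 1 - frac[1]))
                f += w * tables[l][hash_coords(corner)]
        feats.append(f)
    return np.concatenate(feats)

y = encode(np.array([0.3, 0.7]))
print(y.shape)  # (8,) = L * F features fed to the small MLP
```

At coarse levels the grid has fewer cells than table entries, so lookups are collision-free; at fine levels collisions occur, and it is the MLP consuming the concatenated multi-level features that learns to disambiguate them, as the abstract notes.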
