Paper Title
GuardNN: Secure Accelerator Architecture for Privacy-Preserving Deep Learning
Paper Authors
Paper Abstract
This paper proposes GuardNN, a secure DNN accelerator that provides hardware-based protection for user data and model parameters even in an untrusted environment. GuardNN shows that the architecture and protection can be customized for a specific application to provide strong confidentiality and integrity guarantees with negligible overhead. The design of the GuardNN instruction set reduces the TCB to just the accelerator and allows confidentiality protection even when the instructions from a host cannot be trusted. GuardNN minimizes the overhead of memory encryption and integrity verification by customizing the off-chip memory protection for the known memory access patterns of a DNN accelerator. GuardNN is prototyped on an FPGA, demonstrating effective confidentiality protection with ~3% performance overhead for inference.
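The abstract does not spell out GuardNN's memory-protection design, but the general idea it alludes to can be sketched: in counter-mode memory encryption, if the accelerator's off-chip access pattern is known in advance (e.g., each DNN buffer is written once per layer), the counter for each block can be derived from a per-buffer version number instead of being stored and fetched per cache line, removing most of the encryption metadata traffic. The sketch below is a toy illustration of that idea, not GuardNN's actual scheme; it uses SHA-256 from the Python standard library as a stand-in keystream generator, and the names `buffer_id` and `version` are hypothetical.

```python
import hashlib

BLOCK = 32  # bytes of keystream per SHA-256 digest (toy stand-in for an AES block)

def keystream(key: bytes, buffer_id: int, version: int, length: int) -> bytes:
    """Derive a counter-mode keystream from (buffer_id, version, block index).

    Because the counter is computed, not stored, no per-block counter must be
    fetched from off-chip memory -- this models the metadata savings available
    when the access pattern (one version bump per buffer write phase) is known.
    """
    out = bytearray()
    block = 0
    while len(out) < length:
        ctr = f"{buffer_id}:{version}:{block}".encode()
        out += hashlib.sha256(key + ctr).digest()
        block += 1
    return bytes(out[:length])

def xcrypt(key: bytes, buffer_id: int, version: int, data: bytes) -> bytes:
    """XOR with the derived keystream; the same call encrypts and decrypts."""
    ks = keystream(key, buffer_id, version, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))
```

A real design would use AES and pair the encryption with integrity verification (e.g., MACs or an integrity tree); the point of the sketch is only that a deterministic write schedule lets the counter be a cheap function of (buffer, version) rather than stored state.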