Paper Title
Serving Deep Learning Models with Deduplication from Relational Databases
Paper Authors
Abstract
There are significant benefits to serving deep learning models from relational databases. First, features extracted from databases do not need to be transferred to any decoupled deep learning system for inference, so system management overhead can be significantly reduced. Second, in a relational database, data management along the storage hierarchy is fully integrated with query processing, so model serving can continue even when the working set size exceeds the available memory. Applying model deduplication can greatly reduce storage space, memory footprint, cache misses, and inference latency. However, existing data deduplication techniques are not applicable to serving deep learning models in relational databases: they consider neither the impact on model inference accuracy nor the misalignment between tensor blocks and database pages. This work proposes synergistic storage optimization techniques for duplicate detection, page packing, and caching to enhance database systems for model serving. We implemented the proposed approach in netsDB, an object-oriented relational database. Evaluation results show that our techniques significantly improve storage efficiency and model inference latency, and that serving models from relational databases outperforms existing deep learning frameworks when the working set size exceeds the available memory.
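To give an intuition for the model deduplication the abstract describes, the sketch below partitions a weight matrix into fixed-size tensor blocks and stores only one copy of blocks whose (rounded) contents hash identically. This is a minimal illustration of the general idea only; the function name, block size, and rounding tolerance are assumptions for the example, not the paper's actual accuracy-aware detector or page-packing scheme.

```python
import hashlib
import numpy as np

def dedup_tensor_blocks(weights, block_shape=(100, 100), decimals=4):
    """Split a 2-D weight matrix into fixed-size blocks and deduplicate
    blocks whose rounded contents are identical.

    Illustrative sketch only: rounding before hashing lets near-identical
    blocks collapse into one stored copy, trading a small, bounded change
    in the weights for a higher deduplication ratio.
    """
    rows, cols = block_shape
    store = {}    # content hash -> the single stored copy of that block
    layout = []   # block order -> content hash, so the tensor can be rebuilt
    for i in range(0, weights.shape[0], rows):
        for j in range(0, weights.shape[1], cols):
            block = weights[i:i + rows, j:j + cols]
            key = hashlib.sha1(np.round(block, decimals).tobytes()).hexdigest()
            store.setdefault(key, block)
            layout.append(key)
    return store, layout

# Example: a matrix whose two halves contain the same block content,
# e.g. a layer shared (or duplicated) across fine-tuned model versions.
block = np.random.rand(100, 100)
w = np.vstack([block, block.copy()])
store, layout = dedup_tensor_blocks(w)
print(len(layout), "blocks referenced,", len(store), "stored")  # 2 blocks referenced, 1 stored
```

Storing `layout` plus the unique blocks in `store` halves the footprint here; the paper's contribution is doing this inside the database, where the stored blocks must also be packed onto pages and cached effectively.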