Paper Title
Implicit semantic-based personalized micro-video recommendation
Authors
Abstract
With the rapid development of the mobile Internet and big data, enormous amounts of data are generated online, yet the portion any given user is actually interested in is very small. Extracting the information users care about from this mass of data requires solving the information overload problem. In the mobile Internet era, user characteristics and other signals should be combined with the massive data so that content can be recommended to users quickly and accurately, meeting their personalized needs as far as possible. There is therefore an urgent need for fast and effective retrieval among tens of thousands of micro-videos. Micro-video content carries complex meanings, and intrinsic connections exist between videos. For multimodal information, subspace coding learning is introduced to build a coding network from a common latent representation to the feature information of each modality; by accounting for the consistency and complementarity of the information in each modality, a common representation of the complete features is obtained. An end-to-end re-ranking model based on deep learning and an attention mechanism, called the interest-related product similarity model based on multimodal data, is proposed to provide top-N recommendations. A multimodal feature learning module, an interest-related network module, and a product similarity recommendation module together form the new model. Extensive experiments on publicly accessible datasets demonstrate the effectiveness and state-of-the-art performance of the proposed algorithm.
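As a rough illustration only (not the paper's actual architecture), the subspace-coding idea described above, a shared latent representation per item decoded into each modality's features, can be sketched in plain NumPy. All dimensions, the synthetic data, and the linear decoders here are hypothetical stand-ins for the learned coding network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 50 micro-videos, two modalities
# (a 32-d "visual" feature and a 16-d "acoustic" feature),
# and a shared 8-d common latent representation per item.
n_items, d_visual, d_audio, d_latent = 50, 32, 16, 8
X_visual = rng.normal(size=(n_items, d_visual))
X_audio = rng.normal(size=(n_items, d_audio))

# Common latent representation Z and per-modality linear decoders.
# The decoders map the shared subspace back to each modality's
# features: the "common latent -> multimodal features" direction.
Z = rng.normal(size=(n_items, d_latent)) * 0.1
W_v = rng.normal(size=(d_latent, d_visual)) * 0.1
W_a = rng.normal(size=(d_latent, d_audio)) * 0.1

def recon_error():
    # Total reconstruction error across both modalities.
    return (np.linalg.norm(Z @ W_v - X_visual)
            + np.linalg.norm(Z @ W_a - X_audio))

err_before = recon_error()
lr = 0.01
for _ in range(500):
    E_v = Z @ W_v - X_visual   # per-modality residuals
    E_a = Z @ W_a - X_audio
    # Gradient steps on the shared latent and both decoders; the
    # single Z enforces cross-modal consistency, while separate
    # decoders keep modality-specific (complementary) information.
    grad_Z = E_v @ W_v.T + E_a @ W_a.T
    Z -= lr * grad_Z
    W_v -= lr * (Z.T @ E_v)
    W_a -= lr * (Z.T @ E_a)
err_after = recon_error()

# The learned common representation can then drive similarity-based
# top-N retrieval (a stand-in for the product similarity module):
def top_n(query_idx, n=5):
    q = Z[query_idx]
    sims = Z @ q / (np.linalg.norm(Z, axis=1) * np.linalg.norm(q) + 1e-9)
    sims[query_idx] = -np.inf  # exclude the query item itself
    return np.argsort(-sims)[:n]

print(err_before, err_after)
print(top_n(0))
```

In the actual model the linear decoders would be replaced by the paper's learned coding network and the cosine ranking by the attention-based interest-related re-ranking, but the data flow, shared latent, per-modality reconstruction, similarity-based top-N, follows the same shape.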