Paper Title
WSLRec: Weakly Supervised Learning for Neural Sequential Recommendation Models
Paper Authors
Paper Abstract
Learning the user-item relevance hidden in implicit feedback data plays an important role in modern recommender systems. Neural sequential recommendation models, which formulate learning the user-item relevance as a sequential classification problem that distinguishes items in future behaviors from others based on the user's historical behaviors, have attracted a lot of interest in both industry and academia due to their substantial practical value. Despite these practical successes, we argue that the intrinsic {\bf incompleteness} and {\bf inaccuracy} of user behaviors in implicit feedback data are ignored, and we conduct preliminary experiments to support this claim. Motivated by the observation that model-free methods such as behavioral retargeting (BR) and item-based collaborative filtering (ItemCF) hit different parts of the user-item relevance than neural sequential recommendation models do, we propose a novel model-agnostic training approach called WSLRec, which adopts a three-stage framework: pre-training, top-$k$ mining, and fine-tuning. WSLRec resolves the incompleteness problem by pre-training models on extra weak supervisions from model-free methods like BR and ItemCF, and resolves the inaccuracy problem by leveraging top-$k$ mining to screen reliable user-item relevance out of the weak supervisions for fine-tuning. Experiments on two benchmark datasets and online A/B tests verify the rationality of our claims and demonstrate the effectiveness of WSLRec.
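The abstract names the three stages (pre-training on weak supervisions, top-$k$ mining, fine-tuning) but gives no implementation detail. The following is a minimal PyTorch sketch of how such a pipeline could be wired up; SeqRecModel, mine_topk, and the toy BR/ItemCF candidate sets are all illustrative assumptions, not the paper's actual architecture, loss, or mining rule.

import torch
import torch.nn as nn

class SeqRecModel(nn.Module):
    """A deliberately tiny sequential recommender: mean-pooled item
    embeddings scored against every item. Stands in for any neural
    sequential model the three-stage framework could wrap."""
    def __init__(self, num_items: int, dim: int = 32):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, dim, padding_idx=0)
        self.out = nn.Linear(dim, num_items)

    def forward(self, seqs: torch.Tensor) -> torch.Tensor:
        # seqs: (batch, seq_len) item ids, 0 = padding
        user = self.item_emb(seqs).mean(dim=1)   # crude user representation
        return self.out(user)                    # (batch, num_items) relevance scores

def train_step(model, opt, seqs, targets):
    """One sequential-classification step: distinguish the target item
    from all other items given the behavior sequence."""
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(seqs), targets)
    loss.backward()
    opt.step()
    return loss.item()

@torch.no_grad()
def mine_topk(model, seqs, weak_candidates, k=5):
    """Stage 2 (hypothetical rule): keep a weakly supervised item only if
    the pre-trained model itself ranks it inside its own top-k, treating
    agreement as a proxy for reliability."""
    topk = model(seqs).topk(k, dim=-1).indices.tolist()
    return [[i for i in cands if i in set(row)]
            for row, cands in zip(topk, weak_candidates)]

if __name__ == "__main__":
    num_items = 100
    model = SeqRecModel(num_items)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)

    # Toy data; in practice seqs come from logged user behaviors.
    seqs = torch.randint(1, num_items, (8, 10))
    future = torch.randint(1, num_items, (8,))    # observed future items
    weak = torch.randint(1, num_items, (8, 20))   # items surfaced by BR/ItemCF

    # Stage 1: pre-train on weak supervisions (each weak item as a target).
    for col in range(weak.size(1)):
        train_step(model, opt, seqs, weak[:, col])

    # Stage 2: top-k mining filters the weak supervisions.
    mined = mine_topk(model, seqs, weak.tolist(), k=5)

    # Stage 3: fine-tune on observed future behaviors plus mined items.
    for _ in range(3):
        train_step(model, opt, seqs, future)
    for row, items in enumerate(mined):
        for it in items:
            train_step(model, opt, seqs[row:row+1], torch.tensor([it]))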