Paper Title
Inductively Representing Out-of-Knowledge-Graph Entities by Optimal Estimation Under Translational Assumptions
Paper Authors
Paper Abstract
Conventional Knowledge Graph Completion (KGC) assumes that all test entities appear during training. However, in real-world scenarios, Knowledge Graphs (KG) evolve fast with out-of-knowledge-graph (OOKG) entities added frequently, and we need to represent these entities efficiently. Most existing Knowledge Graph Embedding (KGE) methods cannot represent OOKG entities without costly retraining on the whole KG. To enhance efficiency, we propose a simple and effective method that inductively represents OOKG entities by their optimal estimation under translational assumptions. Given pretrained embeddings of the in-knowledge-graph (IKG) entities, our method needs no additional learning. Experimental results show that our method outperforms the state-of-the-art methods with higher efficiency on two KGC tasks with OOKG entities.
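The core idea can be illustrated with a small sketch. Under a TransE-style translational assumption (h + r ≈ t), an OOKG entity's embedding can be estimated in closed form from the auxiliary triples that connect it to IKG entities: each triple yields a translated neighbour embedding, and averaging them is the least-squares estimate under the L2 norm. The snippet below is a minimal illustration of that idea, not the authors' implementation; the function name, data layout, and plain-averaging aggregation are assumptions.

```python
import numpy as np

def estimate_ookg_embedding(aux_triples, entity_emb, relation_emb):
    """Estimate the embedding of an OOKG entity from auxiliary triples
    linking it to in-knowledge-graph (IKG) entities.

    Under the TransE assumption h + r ~= t:
      - if the OOKG entity is the head of (?, r, t), one estimate is t - r
      - if it is the tail of (h, r, ?), one estimate is h + r
    Averaging these per-triple estimates minimizes the squared TransE error.

    aux_triples:  list of (position, relation_id, ikg_entity_id), where
                  position is "head" or "tail" for the OOKG entity.
    entity_emb:   mapping from IKG entity id to a pretrained embedding.
    relation_emb: mapping from relation id to a pretrained embedding.
    """
    estimates = []
    for position, r, e in aux_triples:
        r_vec, e_vec = relation_emb[r], entity_emb[e]
        if position == "head":   # (ookg, r, e): ookg ~= e - r
            estimates.append(e_vec - r_vec)
        else:                    # (e, r, ookg): ookg ~= e + r
            estimates.append(e_vec + r_vec)
    return np.mean(estimates, axis=0)


# Toy usage with random "pretrained" embeddings (illustrative only).
rng = np.random.default_rng(0)
entity_emb = {i: rng.normal(size=50) for i in range(10)}
relation_emb = {i: rng.normal(size=50) for i in range(3)}
aux = [("head", 0, 1), ("tail", 2, 7)]
ookg_vec = estimate_ookg_embedding(aux, entity_emb, relation_emb)
print(ookg_vec.shape)  # (50,)
```

Because the estimate is a closed-form average over pretrained embeddings, no gradient updates or retraining on the whole KG are required, which is what makes the approach inductive and efficient.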