TY - GEN
T1 - RAILD
T2 - 11th International Joint Conference on Knowledge Graphs, IJCKG 2022
AU - Gesese, Genet Asefa
AU - Sack, Harald
AU - Alam, Mehwish
N1 - Publisher Copyright:
© 2022 ACM.
PY - 2022/10/27
Y1 - 2022/10/27
N2 - Due to the open world assumption, Knowledge Graphs (KGs) are never complete. To address this issue, various Link Prediction (LP) methods have been proposed. Some of these methods are inductive LP models which are capable of learning representations for entities not seen during training. However, to the best of our knowledge, none of the existing inductive LP models focuses on learning representations for unseen relations. In this work, a novel Relation Aware Inductive Link preDiction (RAILD) model is proposed for KG completion which learns representations for both unseen entities and unseen relations. In addition to leveraging textual literals associated with both entities and relations by employing language models, RAILD also introduces a novel graph-based approach to generate features for relations. Experiments are conducted with both existing and newly created challenging benchmark datasets, and the results indicate that RAILD leads to performance improvements over the state-of-the-art models. Moreover, since there are no existing inductive LP models which learn representations for unseen relations, we have created our own baselines, and the results obtained with RAILD also outperform these baselines.
AB - Due to the open world assumption, Knowledge Graphs (KGs) are never complete. To address this issue, various Link Prediction (LP) methods have been proposed. Some of these methods are inductive LP models which are capable of learning representations for entities not seen during training. However, to the best of our knowledge, none of the existing inductive LP models focuses on learning representations for unseen relations. In this work, a novel Relation Aware Inductive Link preDiction (RAILD) model is proposed for KG completion which learns representations for both unseen entities and unseen relations. In addition to leveraging textual literals associated with both entities and relations by employing language models, RAILD also introduces a novel graph-based approach to generate features for relations. Experiments are conducted with both existing and newly created challenging benchmark datasets, and the results indicate that RAILD leads to performance improvements over the state-of-the-art models. Moreover, since there are no existing inductive LP models which learn representations for unseen relations, we have created our own baselines, and the results obtained with RAILD also outperform these baselines.
KW - Entity representations
KW - Inductive link prediction
KW - Knowledge graphs
KW - Relation representations
KW - Textual descriptions
UR - https://www.scopus.com/pages/publications/85148547698
U2 - 10.1145/3579051.3579066
DO - 10.1145/3579051.3579066
M3 - Conference contribution
AN - SCOPUS:85148547698
T3 - ACM International Conference Proceeding Series
SP - 82
EP - 90
BT - Proceedings of the 11th International Joint Conference on Knowledge Graphs, IJCKG 2022
A2 - Artale, Alessandro
A2 - Calvanese, Diego
A2 - Wang, Haofen
A2 - Zhang, Xiaowang
PB - Association for Computing Machinery
Y2 - 27 October 2022 through 28 October 2022
ER -