The Translation Family of Knowledge Graph Embedding (TransE, TransH, TransR, TransD)

data: WN18, WN11, FB15K, FB13, FB40K
task: Knowledge Graph Embedding

TransE

Translating Embeddings for Modeling Multi-relational Data (2013)
https://proceedings.neurips.cc/paper/2013/file/1cecc7a77928ca8133fa24680a88d2f9-Paper.pdf

This is the first work in the translation model family. The basic idea is to make the sum of the head vector and the relation vector as close as possible to the tail vector, where closeness is measured with the L1 or L2 norm.

Training minimizes the margin-based ranking loss

$\mathrm{L}(h, r, t)=\max \left(0, d_{\text{pos}}-d_{\text{neg}}+\text{margin}\right)$

where $d_{\text{pos}}$ is the distance score of a true triple and $d_{\text{neg}}$ the score of a corrupted (negative) triple, built by replacing the head or tail of a true triple with a random entity. The loss reaches zero as soon as the negative score exceeds the positive score by at least the margin (a hyperparameter we set, usually 1).

However, this model can only handle one-to-one relations and is not suited to one-to-many / many-to-one relations. For example, take the two facts (skytree, location, tokyo) and (gundam, location, tokyo). Since training pushes every head vector toward $t - r$, both head vectors are pulled to the same point, so after training the "skytree" entity vector ends up very close to the "gundam" entity vector, even though the two entities share no such similarity.

```python
import tensorflow as tf  # TensorFlow 1.x

with tf.name_scope("embedding"):
    # Entity and relation embedding tables, Xavier-initialized
    self.ent_embeddings = tf.get_variable(name="ent_embedding", shape=[entity_total, size],
                                          initializer=tf.contrib.layers.xavier_initializer(uniform=False))
    self.rel_embeddings = tf.get_variable(name="rel_embedding", shape=[relation_total, size],
                                          initializer=tf.contrib.layers.xavier_initializer(uniform=False))
    # Look up embeddings for the positive triple (h, r, t) ...
    pos_h_e = tf.nn.embedding_lookup(self.ent_embeddings, self.pos_h)
    pos_t_e = tf.nn.embedding_lookup(self.ent_embeddings, self.pos_t)
    pos_r_e = tf.nn.embedding_lookup(self.rel_embeddings, self.pos_r)
    # ... and for the corrupted (negative) triple
    neg_h_e = tf.nn.embedding_lookup(self.ent_embeddings, self.neg_h)
    neg_t_e = tf.nn.embedding_lookup(self.ent_embeddings, self.neg_t)
    neg_r_e = tf.nn.embedding_lookup(self.rel_embeddings, self.neg_r)

    if config.L1_flag:
        # L1 distance ||h + r - t||_1
        pos = tf.reduce_sum(tf.abs(pos_h_e + pos_r_e - pos_t_e), 1, keep_dims=True)
        neg = tf.reduce_sum(tf.abs(neg_h_e + neg_r_e - neg_t_e), 1, keep_dims=True)
        self.predict = pos
    else:
        # Squared L2 distance ||h + r - t||_2^2
        pos = tf.reduce_sum((pos_h_e + pos_r_e - pos_t_e) ** 2, 1, keep_dims=True)
        neg = tf.reduce_sum((neg_h_e + neg_r_e - neg_t_e) ** 2, 1, keep_dims=True)
        self.predict = pos

with tf.name_scope("output"):
    # Margin-based ranking loss over the batch
    self.loss = tf.reduce_sum(tf.maximum(pos - neg + margin, 0))
```

TransH

Knowledge Graph Embedding by Translating on Hyperplanes (2014)
...
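The TransH discussion is truncated above, but its core scoring trick from the 2014 paper is to give each relation its own hyperplane: entities are first projected onto the hyperplane (normal vector $w_r$) and the translation $d_r$ happens in that projected space, so distinct entities can share a projection and one-to-many relations stop collapsing head vectors together. Below is a minimal sketch of that projection in the same TF 1.x style as the TransE snippet; the function and tensor names (`transh_score`, `h_e`, `w_r`, etc.) are illustrative, not from the original post.

```python
import tensorflow as tf  # TensorFlow 1.x, matching the snippet above


def transh_score(h_e, t_e, d_r, w_r):
    """TransH score: translate on the relation-specific hyperplane.

    h_e, t_e : head/tail entity embeddings, shape [batch, size]
    d_r      : translation vector lying on the hyperplane, shape [batch, size]
    w_r      : normal vector of the hyperplane, shape [batch, size]
    """
    # Normalize w_r so the projection is onto a true hyperplane
    w_r = tf.nn.l2_normalize(w_r, 1)
    # Project h and t onto the hyperplane: x_perp = x - (w_r . x) * w_r
    h_perp = h_e - tf.reduce_sum(w_r * h_e, 1, keep_dims=True) * w_r
    t_perp = t_e - tf.reduce_sum(w_r * t_e, 1, keep_dims=True) * w_r
    # Same translation-distance score as TransE, but in the projected space
    return tf.reduce_sum((h_perp + d_r - t_perp) ** 2, 1, keep_dims=True)
```

In the skytree/gundam example, the two head entities can keep distinct embeddings as long as their projections onto the `location` hyperplane coincide, which is exactly the failure mode of TransE that this projection is designed to fix.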

2020-03-05 · 5 min · Cong Chan