adversarial autoencoder
Posted: 2023-04-25 13:01:30
An adversarial autoencoder (AAE) is an autoencoder model built on the generative adversarial network (GAN) framework. It introduces a discriminator network that forces the distribution produced by the autoencoder to match a target distribution, which improves the quality of generated samples. It can also be applied to tasks such as unsupervised learning and data compression.
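The two objectives described above can be sketched in plain Python. This is a minimal toy illustration, not a real AAE: the "encoder", "decoder", and "discriminator" below are hypothetical one-line stand-ins for neural networks, chosen only to make the reconstruction term and the adversarial term concrete.

```python
import math

# Toy sketch of the two AAE objectives:
# 1) reconstruction loss between x and decode(encode(x))
# 2) adversarial loss that pushes encode(x) toward what the discriminator calls "real"

def mse(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) / len(a)

def bce(p, y, eps=1e-12):
    # binary cross-entropy for a single probability p and label y
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

# hypothetical "encoder"/"decoder": scaling by w and back by 1/w
w = 2.0
encode = lambda x: [w * xi for xi in x]
decode = lambda z: [zi / w for zi in z]

# hypothetical "discriminator": sigmoid of the mean activation
disc = lambda z: 1.0 / (1.0 + math.exp(-sum(z) / len(z)))

x = [0.5, -0.3, 0.1]
z = encode(x)
x_rec = decode(z)

rec_loss = mse(x, x_rec)      # autoencoder term (zero here, since decode inverts encode)
adv_loss = bce(disc(z), 1.0)  # adversarial term: the encoder tries to fool the discriminator
```

A real AAE trains the discriminator and the encoder in alternation on these two signals, with neural networks in place of the toy functions above.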
Related question
Compared with homogeneous network-based methods, heterogeneous network-based treatment is closer to reality, due to the different kinds of entities with various kinds of relations [22–24]. In recent years, knowledge graph (KG) has been utilized for data integration and federation [11, 17]. It allows the knowledge graph embedding (KGE) model to excel in the link prediction tasks [18, 19]. For example, Dai et al. provided a method using Wasserstein adversarial autoencoder-based KGE, which can solve the problem of vanishing gradient on the discrete representation and exploit autoencoder to generate high-quality negative samples [20]. The SumGNN model proposed by Yu et al. succeeds in integrating external information of KG by combining high-quality features and multi-channel knowledge of the sub-graph [21]. Lin et al. proposed KGNN to predict DDI only based on triple facts of KG [66]. Although these methods have used KG information, only focusing on the triple facts or simple data fusion can limit performance and inductive capability [69]. Su et al. successively proposed two DDIs prediction methods [55, 56]. The first one is an end-to-end model called KG2ECapsule based on the biomedical knowledge graph (BKG), which can generate high-quality negative samples and make predictions through feature recursively propagating. Another one learns both drug attributes and triple facts based on attention to extract global representation and obtains good performance. However, these methods also have limited ability or ignore the merging of information from multiple perspectives. Apart from the above, the single perspective has many limitations, such as the need to ensure the integrity of related descriptions, just as network-based methods cannot process new nodes [65]. So, the methods only based on network are not inductive, causing limited generalization [69].
However, it can be alleviated by fully using the intrinsic property of the drug seen as local information, such as chemical structure (CS) [40]. And a handful of existing frameworks can effectively integrate multi-information without losing induction [69]. Thus, there is a necessity for us to propose an effective model to fully learn and fuse the local and global information for improving performance of DDI identification through multiple information complementing. What does this mean?
This passage argues that heterogeneous network-based methods are closer to reality than homogeneous ones, and reviews how knowledge graphs (KG) and knowledge graph embedding (KGE) models are applied to link prediction. It then surveys existing approaches and their limitations, such as weak multi-source information fusion and limited inductive capability, and motivates a model that exploits local drug information (e.g., chemical structure) together with global information to improve drug-drug interaction (DDI) identification. In short, it is a literature review and motivation section from research on DDI prediction.
def define_gan(self):
    self.generator_aux = Generator(self.hidden_dim).build(input_shape=(self.seq_len, self.n_seq))
    self.supervisor = Supervisor(self.hidden_dim).build(input_shape=(self.hidden_dim, self.hidden_dim))
    self.discriminator = Discriminator(self.hidden_dim).build(input_shape=(self.hidden_dim, self.hidden_dim))
    self.recovery = Recovery(self.hidden_dim, self.n_seq).build(input_shape=(self.hidden_dim, self.hidden_dim))
    self.embedder = Embedder(self.hidden_dim).build(input_shape=(self.seq_len, self.n_seq))

    X = Input(shape=[self.seq_len, self.n_seq], batch_size=self.batch_size, name='RealData')
    Z = Input(shape=[self.seq_len, self.n_seq], batch_size=self.batch_size, name='RandomNoise')

    # AutoEncoder
    H = self.embedder(X)
    X_tilde = self.recovery(H)
    self.autoencoder = Model(inputs=X, outputs=X_tilde)

    # Adversarial supervised architecture
    E_Hat = self.generator_aux(Z)
    H_hat = self.supervisor(E_Hat)
    Y_fake = self.discriminator(H_hat)
    self.adversarial_supervised = Model(inputs=Z, outputs=Y_fake, name='AdversarialSupervised')

    # Adversarial architecture in latent space
    Y_fake_e = self.discriminator(E_Hat)
    self.adversarial_embedded = Model(inputs=Z, outputs=Y_fake_e, name='AdversarialEmbedded')

    # Synthetic data generation
    X_hat = self.recovery(H_hat)
    self.generator = Model(inputs=Z, outputs=X_hat, name='FinalGenerator')

    # Final discriminator model
    Y_real = self.discriminator(H)
    self.discriminator_model = Model(inputs=X, outputs=Y_real, name='RealDiscriminator')

    # Loss functions
    self._mse = MeanSquaredError()
    self._bce = BinaryCrossentropy()
This code defines the structure of a generative adversarial network (GAN). It builds several sub-models: a generator (Generator), a supervisor (Supervisor), a discriminator (Discriminator), a recovery network (Recovery), and an embedder (Embedder).
The generator takes random noise as input and produces synthetic sequences. The supervisor takes the generator's output and refines it to produce higher-quality sequences. The discriminator distinguishes real data from synthetic data. The recovery network maps the hidden representation back to the original data space, and the embedder maps the original data into the hidden representation.
The code then wires these parts into several composite models: an autoencoder (AutoEncoder), an adversarial supervised model (AdversarialSupervised), and an adversarial model in the embedding space (AdversarialEmbedded). The autoencoder reconstructs the original data from itself; the two adversarial models apply the discriminator to the supervised latent sequence and to the raw generator output, respectively, for adversarial training in latent space.
In addition, it defines a final generator model, which produces synthetic data from noise, and a discriminator model, which scores real data.
Finally, it instantiates mean squared error (MeanSquaredError) and binary cross-entropy (BinaryCrossentropy) as the loss functions.
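How those two losses would typically be combined during training can be sketched in plain Python. The scalar values below are hypothetical stand-ins for the Keras MeanSquaredError / BinaryCrossentropy applied to batches of tensors; this illustrates the usual GAN loss pairing, not the exact training loop of the snippet.

```python
import math

def bce(p, y, eps=1e-12):
    # binary cross-entropy for one probability p and label y
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# hypothetical discriminator outputs for a real latent code H and a fake one H_hat
y_real, y_fake = 0.9, 0.2

# discriminator objective: push real toward 1, fake toward 0
d_loss = bce(y_real, 1.0) + bce(y_fake, 0.0)

# generator objective (adversarial part): make the fake be classified as real
g_loss_adv = bce(y_fake, 1.0)

# autoencoder part: reconstruction error between X and X_tilde, as in self._mse
x       = [0.1, 0.4, -0.2]
x_tilde = [0.12, 0.35, -0.25]
rec_loss = mse(x, x_tilde)
```

In the real model these terms would be computed on tensors and minimized in alternation: the discriminator on d_loss, the generator and embedder/recovery networks on the adversarial and reconstruction terms.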
Note that this is only part of the code, so the model's full behavior and training procedure cannot be determined from it alone. If you need a more detailed explanation, please provide more context.