A Survey of Deep-Learning-Driven Graph Neural Networks: Innovations and Opportunities
*Graph Neural Networks: Methods, Applications, and Opportunities* is a recently published survey paper that examines graph neural networks (GNNs) in depth. The article reviews the major impact deep learning has had on machine learning over the past decade, particularly in computer vision, speech recognition, and natural language processing. These domains, which traditionally rely on Euclidean data representations, have achieved unprecedented state-of-the-art performance driven by deep learning.

In many non-Euclidean domains, such as social networks, chemical molecular structures, and recommender systems, graph data provide a more natural and precise representation. Nodes in a graph represent entities, while edges capture the relationships and interactions between them, which allows GNNs to capture the intrinsic connections and patterns in complex data. Traditional hand-crafted features, by contrast, often fail to fully exploit the potential of such data.

With the development of deep learning, and in particular the success of convolutional neural networks (CNNs) and recurrent neural networks (RNNs), researchers began exploring how to apply these techniques to graph data. The paper details the application of GNNs across different learning settings: supervised, unsupervised, semi-supervised, and self-supervised learning. In supervised learning, GNNs learn from labeled data to make predictions via node classification, graph classification, or property prediction; in unsupervised learning, GNNs are used to discover latent community structure or data clusters; semi-supervised learning incorporates a small amount of label information to improve generalization; and in self-supervised learning, carefully designed pre-training tasks allow GNNs to learn rich graph representations from unlabeled data.

The paper also discusses open challenges and future research directions for GNNs, such as mitigating overfitting, designing more effective graph convolution operations, and better handling dynamically changing graphs. The survey provides a comprehensive framework for understanding and exploiting graph data, and is an important reference for practitioners in machine learning, data science, and artificial intelligence who want to understand current GNN techniques and their potential applications.
…sharpener that performs the inverse operation of smoothing and decodes hidden states, creating a symmetric graph auto-encoder.
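To make the auto-encoder pattern concrete, the sketch below shows the most common GAE design: a GCN-style encoder that produces latent node embeddings Z, paired with an inner-product decoder that reconstructs the adjacency matrix as sigmoid(ZZᵀ). This is a minimal illustration rather than the smoothing/sharpening architecture described above; the class name, layer sizes, and the pre-normalized adjacency `A_hat` are all assumptions for the sketch.

```python
import torch
import torch.nn as nn

class GAE(nn.Module):
    """Minimal GCN-encoder / inner-product-decoder graph auto-encoder (sketch)."""

    def __init__(self, in_dim: int, hid_dim: int, z_dim: int):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, z_dim)

    def encode(self, A_hat: torch.Tensor, X: torch.Tensor) -> torch.Tensor:
        # Each layer propagates node features over the normalized adjacency.
        h = torch.relu(A_hat @ self.lin1(X))
        return A_hat @ self.lin2(h)            # latent node embeddings Z

    def decode(self, Z: torch.Tensor) -> torch.Tensor:
        # Inner-product decoder: edge probability = sigmoid(z_i . z_j).
        return torch.sigmoid(Z @ Z.T)

    def forward(self, A_hat: torch.Tensor, X: torch.Tensor) -> torch.Tensor:
        return self.decode(self.encode(A_hat, X))
```

Training typically minimizes a reconstruction loss (e.g. binary cross-entropy) between the decoded adjacency and the observed one, so the latent space must preserve link structure.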
3.2.2 Contrastive Learning. In addition to GAEs, contrastive learning is used for graph representation learning in the unsupervised setting. Velickovic et al. [169] proposed Deep Graph Infomax (DGI), which extends the Deep InfoMax of Hjelm et al. [63]. DGI maximizes the mutual information between graph-level and node-level representations. InfoGraph, presented by Sun et al. [156], maximizes the mutual information between graph-level representations and subgraph-level representations of various sizes, such as nodes, edges, and triangles, to learn graph representations. The multi-view method of Hassani and Khasahmadi [59] contrasts representations of first-order adjacency matrices with graph diffusion, achieving state-of-the-art results on numerous graph learning benchmarks. Okuda et al. [122] recently proposed an unsupervised graph representation learning method for discovering and localizing common objects in a set of images of a particular object.
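As a concrete illustration of the DGI-style objective, the sketch below scores node embeddings against a mean-pooled graph summary with a bilinear discriminator, treating nodes from the real graph as positives and nodes from a corrupted graph as negatives. This is a minimal PyTorch sketch, not the authors' exact implementation; the tensor names and the corruption scheme noted in the comments are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DGIDiscriminator(nn.Module):
    """Bilinear scorer D(h, s) = h^T W s between node and summary vectors."""

    def __init__(self, dim: int):
        super().__init__()
        self.W = nn.Parameter(torch.empty(dim, dim))
        nn.init.xavier_uniform_(self.W)

    def forward(self, h: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
        return h @ self.W @ s  # one logit per node

def dgi_loss(h_pos: torch.Tensor, h_neg: torch.Tensor,
             disc: DGIDiscriminator) -> torch.Tensor:
    # h_pos: node embeddings of the real graph; h_neg: embeddings of a
    # corrupted graph (e.g. the same encoder run on row-shuffled features).
    s = torch.sigmoid(h_pos.mean(dim=0))       # graph-level summary vector
    pos = disc(h_pos, s)                       # real nodes -> label 1
    neg = disc(h_neg, s)                       # corrupted nodes -> label 0
    return (F.binary_cross_entropy_with_logits(pos, torch.ones_like(pos)) +
            F.binary_cross_entropy_with_logits(neg, torch.zeros_like(neg)))
```

Minimizing this binary cross-entropy is a standard lower-bound surrogate for maximizing the mutual information between node and graph representations.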
3.2.3 Random Walk. Random walks were shown to scale to large networks and to capture graph structure efficiently by Perozzi et al. [138] in their proposed method, DeepWalk. Moreover, random walks were demonstrated to be capable of capturing both structural equivalence (vertices with comparable local structures receive similar embeddings) and homophily (vertices with similar embeddings belong to the same communities) [41]. Random walks have been coupled with language-modeling representation learning methods to produce high-quality vertex representations that are used for downstream learning tasks such as vertex and edge prediction, as shown by Du et al. [41] and Perozzi et al. [138]. In addition, random walk-based techniques have been extended to capture subgraph embeddings, as in Adhikari et al. [2], and vertex representations in heterogeneous graphs, as in Dong et al. [40].
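The core DeepWalk-style recipe can be sketched in a few lines: sample truncated random walks over the graph and treat them as "sentences" for a skip-gram model, so that vertices co-occurring on walks receive similar embeddings. The sketch below uses networkx and gensim; the walk count, walk length, and embedding size are illustrative choices, not the settings from [138].

```python
import random
import networkx as nx
from gensim.models import Word2Vec

def random_walks(G: nx.Graph, num_walks: int = 10, walk_length: int = 40):
    """Sample truncated uniform random walks starting from every vertex."""
    walks = []
    for _ in range(num_walks):
        for start in G.nodes():
            walk = [start]
            while len(walk) < walk_length:
                neighbors = list(G.neighbors(walk[-1]))
                if not neighbors:          # dead end: stop this walk early
                    break
                walk.append(random.choice(neighbors))
            walks.append([str(v) for v in walk])  # Word2Vec expects tokens
    return walks

G = nx.karate_club_graph()
walks = random_walks(G)
# sg=1 selects the skip-gram objective, as in DeepWalk.
model = Word2Vec(walks, vector_size=64, window=5, min_count=0, sg=1)
embedding = model.wv["0"]                  # 64-d representation of vertex 0
```

The resulting vertex vectors can then be fed to any downstream classifier for tasks such as vertex or edge prediction.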
Table 2. Summary of existing GNN-based unsupervised learning methods. Abbreviations: NC=Node Classification, LP=Link Prediction, GC=Graph Classification, GCN=Graph Convolutional Network, NN=Neural Network, GAT=Graph Attention Network, GAE=Graph Auto-encoder, RW=Random Walk.
| Paper | Feature extraction | Technique | Task | Key functionality |
|---|---|---|---|---|
| [122] | RW | CNN | NC | Discovers common objects and localizes them in a set of particular object images. |
| [169] | Contrastive | CNN | NC | Maximizes the mutual information between graph representations and node representations. |
| [156] | Contrastive | K-layer GCN | NC, LP, GC | Graph-level representations. |
| [59] | Contrastive | GCN | NC, GC | Learns node- and graph-level representations. |
| [63] | Contrastive | NN | NC | New avenue for unsupervised learning of representations. |
| [163] | RW | NN | NC, LP | Low-dimensional node embeddings in huge graphs. |
| [57] | RW | GCN & LSTM | NC, GC | Low-dimensional node embeddings in huge graphs. |
| [40] | RW | NN | NC | Node representation learning for heterogeneous networks. |
| [2] | RW | NN | NC | Formulates the subgraph embedding problem. |
| [134] | GAE | NN | NC, LP | Hyperbolic representation learning via message-passing auto-encoders. |
| [91] | GAE | NN | NC | Learns prerequisite chains in both known and unknown domains for knowledge acquisition. |
| [79] | GAE | GCN | LP | Learns interpretable latent representations for undirected graphs. |
| [131] | GAE | GCN | LP, GC | Represents graph-structured data in a low-dimensional space for graph analytics. |