RGC Writing

Abstract

1 INTRODUCTION

Figure 1: Clustering performance comparison.

2 RELATED WORK

2.1 Deep Graph Clustering

2.2 Non-parametric Clustering and Cluster Number Estimation

3 REINFORCEMENT GRAPH CLUSTERING

3.1 Notation and Problem Definition

3.2 Encoding

3.3 Cluster Number Learning Module

Figure 2: Reinforcement Graph Clustering (RGC)

3.4 Training

Figure 3: 2D 𝑡-SNE visualization of seven methods.

3.5 Complexity Analysis

4 EXPERIMENT

4.1 Dataset

4.2 Experimental setup

4.3 Clustering Performance Comparison

4.4 Time Cost Comparison of 𝐾 Estimation

Figure 4: Training time comparison of our proposed RGC and the 𝐾 estimation method ELBOW applied to deep graph clustering.

4.5 Effectiveness of Learning the Cluster Number

Figure 5: Clustering performance of the baseline trained with different cluster numbers 𝐾 on four datasets.
Figure 6: Determining the cluster number with the ELBOW cluster number estimation method.
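For reference, below is a minimal sketch of the classical ELBOW procedure used as the 𝐾-estimation baseline in Figure 6; it is not the proposed RGC module, and the `features` array is a hypothetical stand-in for learned node embeddings. The idea is to run k-means for a range of 𝐾 values and read off the 𝐾 at which the within-cluster error curve bends most sharply.

```python
# Sketch of the ELBOW baseline for cluster number estimation (assumption:
# `features` stands in for node embeddings produced by some encoder).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 16))  # hypothetical embedding matrix

sse = []
for k in range(2, 11):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(features)
    sse.append(km.inertia_)  # within-cluster sum of squared errors

# The "elbow" is the K where the SSE curve bends most sharply; in practice it
# is read off a plot of K versus SSE, as in Figure 6.
for k, v in zip(range(2, 11), sse):
    print(k, round(v, 1))
```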

4.6 Analysis

4.6.1 Hyper-parameter Analyses.

Due to the page limitation, the hyper-parameter analyses are presented in the Appendix.

4.6.2 𝑡-SNE Visualization Analysis.

We visualize the samples in the latent space with the 𝑡-SNE algorithm. As shown in Figure 3, experiments with seven compared methods are conducted on the CORA and AMAP datasets. The visualization results show that our proposed RGC learns the clustering structure better than the other methods.
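As a minimal illustrative sketch (not the authors' implementation), the following projects latent embeddings to 2D with scikit-learn's 𝑡-SNE and colors points by cluster assignment; `embeddings` and `labels` are hypothetical placeholders for the representations and predicted clusters of a trained model.

```python
# Sketch: 2D t-SNE visualization of latent embeddings, colored by cluster ID.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 64))   # stand-in for learned latent features
labels = rng.integers(0, 7, size=500)     # stand-in for predicted cluster IDs

points_2d = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(embeddings)

plt.scatter(points_2d[:, 0], points_2d[:, 1], c=labels, s=5, cmap="tab10")
plt.title("t-SNE visualization of latent embeddings")
plt.show()
```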

4.6.3 Loss Convergence Analyses.

Due to the space limitation of the main text, the loss convergence analyses are presented in the Appendix.

Figure 7: Hyper-parameter analysis of the trade-off parameter 𝛼 on four datasets.
Figure 8: Loss convergence analysis on two datasets.

5 CONCLUSION
