Deep multi-attributed-view graph representation learning
Graph representation learning (graph embedding) aims to encode a graph into a low-dimensional feature space. Deep representation learning on attributed graphs exploits both the graph structure and the node attributes, and has proven effective for graph learning. The rich information in a graph can be described by attributes from different perspectives; in this research field, these perspectives are treated as attributed views. Taking social networks as an example, a user's profile and their posted content can be regarded as two separate attributed views. Most existing attributed graph representation learning methods focus on a single attributed view, which inherently limits their applicability to multi-attributed-view graphs. In this work, we present a novel model for learning representations of graphs with multiple attributed views. The model, the deep Multi-attributed-view graph Convolutional Autoencoder (MagCAE), is built on an unsupervised autoencoder framework with graph convolutional neural network layers. In addition, a novel multi-attributed-view proximity measure and similarity loss function are proposed to further improve the quality of the learned embeddings. Extensive experiments against 10 baselines on 5 real-world multi-attributed-view graphs demonstrate the superiority of MagCAE on link prediction, measured by average precision (AP) and area under the ROC curve (AUC), and on node classification, measured by Micro and Macro F1-scores.
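The abstract does not specify MagCAE's architecture details, but the overall pipeline it describes (per-view GCN encoders, fusion of view embeddings, reconstruction-based decoding for link prediction) can be sketched minimally. The following numpy sketch is an illustrative assumption, not the authors' implementation: each attributed view is encoded with two graph convolutional layers, the view embeddings are fused by simple concatenation (a hypothetical stand-in for MagCAE's fusion), and an inner-product decoder scores edges.

```python
import numpy as np

def normalize_adj(A):
    """Symmetric normalization with self-loops: D^-1/2 (A + I) D^-1/2."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_encode(A_norm, X, W1, W2):
    """Two GCN layers (ReLU then linear), as in a standard graph autoencoder."""
    H = np.maximum(A_norm @ X @ W1, 0.0)
    return A_norm @ H @ W2

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Toy undirected 4-node graph.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_norm = normalize_adj(A)

# Two hypothetical attributed views (e.g. user profiles vs. posted content),
# with different feature dimensionalities.
X_view1 = rng.normal(size=(4, 6))
X_view2 = rng.normal(size=(4, 8))

emb_dim = 2
Z_views = []
for X in (X_view1, X_view2):
    W1 = rng.normal(scale=0.5, size=(X.shape[1], 4))   # untrained weights, for shape only
    W2 = rng.normal(scale=0.5, size=(4, emb_dim))
    Z_views.append(gcn_encode(A_norm, X, W1, W2))

# Fuse view embeddings by concatenation (illustrative stand-in for MagCAE's fusion).
Z = np.concatenate(Z_views, axis=1)     # (4, 4) node embeddings
A_rec = sigmoid(Z @ Z.T)                # inner-product decoder -> edge probabilities
print(Z.shape, A_rec.shape)
```

In a trained graph autoencoder, the weights would be optimized against a reconstruction loss on `A`, and the resulting edge probabilities in `A_rec` would be ranked to evaluate link prediction by AP and AUC.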