Combinatorial Learning of Robust Deep Graph Matching:
an Embedding based Approach

This project page includes code and additional results for our ICCV & PAMI papers.

Abstract

Graph matching aims to establish node correspondence between two graphs, a fundamental problem that is challenging due to its NP-complete nature. One practical consideration is the effective modeling of the affinity function in the presence of noise, such that the mathematically optimal matching result is also physically meaningful. This paper resorts to deep neural networks to learn node and edge features, as well as the affinity model for graph matching, in an end-to-end fashion. The learning is supervised by a combinatorial permutation loss over nodes. Specifically, the learned parameters belong to convolutional neural networks for image feature extraction, graph neural networks for node embedding that convert structural (beyond second-order) information into node-wise features leading to a linear assignment problem, and the affinity kernel between two graphs. Our approach enjoys flexibility in that the permutation loss is agnostic to the number of nodes, and the embedding model is shared among nodes, so the network can handle varying numbers of nodes in both training and inference. Moreover, our network is class-agnostic. Experimental results on extensive benchmarks show state-of-the-art performance. The method exhibits some generalization capability across categories and datasets, and is capable of robust matching against outliers.
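The combinatorial permutation loss mentioned above can be sketched as follows: node-to-node affinity scores are normalized into a doubly-stochastic matrix by Sinkhorn iterations (alternating row and column normalization), and a cross-entropy loss is computed against the ground-truth 0/1 permutation matrix. This is a minimal NumPy sketch under our own simplifying assumptions, not the papers' actual implementation; function names and the iteration count are illustrative.

```python
import numpy as np

def sinkhorn(scores, n_iter=20):
    """Normalize a score matrix into an (approximately) doubly-stochastic
    matrix by alternating row and column normalization."""
    s = np.exp(scores)  # exponentiate to guarantee positive entries
    for _ in range(n_iter):
        s = s / s.sum(axis=1, keepdims=True)  # rows sum to 1
        s = s / s.sum(axis=0, keepdims=True)  # columns sum to 1
    return s

def permutation_loss(s, perm_gt):
    """Binary cross-entropy between the doubly-stochastic matching matrix
    `s` and the ground-truth permutation matrix `perm_gt` (entries 0/1)."""
    eps = 1e-9  # numerical safety for log
    return -np.mean(perm_gt * np.log(s + eps)
                    + (1.0 - perm_gt) * np.log(1.0 - s + eps))
```

Because the loss is an element-wise mean over the matching matrix, it places no constraint on the number of nodes, which is one way to read the "agnostic to the number of nodes" property claimed above.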

Authors
Paper & code
Approach
Permutation loss and Intra-graph Affinity based Graph Matching (PIA-GM)
Permutation loss and Cross-graph Affinity based Graph Matching (PCA-GM)
Iterative Permutation loss and Cross-graph Affinity based Graph Matching (IPCA-GM)
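To illustrate the embedding-based idea behind these models (structural information aggregated into node-wise features, after which matching reduces to a linear assignment problem), here is a toy NumPy sketch. The averaging message-passing scheme and the brute-force assignment solver are stand-ins chosen for brevity, not the papers' actual GNN layers or solver.

```python
import itertools
import numpy as np

def embed(adj, feat, n_layers=2):
    """Toy message passing: each layer mixes a node's own feature with the
    mean of its neighbors' features, folding structure into node features."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)  # avoid division by zero
    x = feat
    for _ in range(n_layers):
        x = 0.5 * x + 0.5 * (adj @ x) / deg
    return x

def match(adj1, feat1, adj2, feat2):
    """Match nodes of two graphs via inner-product affinity between their
    embeddings, solved as a linear assignment problem."""
    x1, x2 = embed(adj1, feat1), embed(adj2, feat2)
    aff = x1 @ x2.T  # node-to-node affinity matrix
    # Brute-force assignment for clarity on tiny graphs; practical systems
    # use the Hungarian algorithm or Sinkhorn-based relaxation instead.
    n = aff.shape[0]
    best = max(itertools.permutations(range(n)),
               key=lambda p: sum(aff[i, p[i]] for i in range(n)))
    return list(best)  # best[i] = node in graph 2 matched to node i in graph 1
```

The cross-graph variants (PCA-GM, IPCA-GM) additionally let the two graphs exchange information during embedding, which this single-graph sketch omits.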
Results

Experimental results on the Pascal VOC Keypoint dataset without outliers:

| Method  | aero | bike | bird | boat | bottle | bus  | car  | cat  | chair | cow  | table | dog  | horse | mbike | person | plant | sheep | sofa | train | tv   | mean |
|---------|------|------|------|------|--------|------|------|------|-------|------|-------|------|-------|-------|--------|-------|-------|------|-------|------|------|
| GMN     | 31.9 | 47.2 | 51.9 | 40.8 | 68.7   | 72.2 | 53.6 | 52.8 | 34.6  | 48.6 | 72.3  | 47.7 | 54.8  | 51.0  | 38.6   | 75.1  | 49.5  | 45.0 | 83.0  | 86.3 | 55.3 |
| GMN-PL  | 31.1 | 46.2 | 58.2 | 45.9 | 70.6   | 76.4 | 61.2 | 61.7 | 35.5  | 53.7 | 58.9  | 57.5 | 56.9  | 49.3  | 34.1   | 77.5  | 57.1  | 53.6 | 83.2  | 88.6 | 57.9 |
| PIA-GM  | 41.5 | 55.8 | 60.9 | 51.9 | 75.0   | 75.8 | 59.6 | 65.2 | 33.3  | 65.9 | 62.8  | 62.7 | 67.7  | 62.1  | 42.9   | 80.2  | 64.3  | 59.5 | 82.7  | 90.1 | 63.0 |
| PCA-GM  | 51.2 | 61.3 | 61.6 | 58.4 | 78.8   | 73.9 | 68.5 | 71.1 | 40.1  | 63.3 | 45.1  | 64.4 | 66.4  | 62.2  | 45.1   | 79.1  | 68.4  | 60.0 | 80.3  | 91.9 | 64.6 |
| IPCA-GM | 51.0 | 64.9 | 68.4 | 60.5 | 80.2   | 74.7 | 71.0 | 73.5 | 42.2  | 68.5 | 48.9  | 69.3 | 67.6  | 64.8  | 48.6   | 84.2  | 69.8  | 62.0 | 79.3  | 89.3 | 66.9 |

Experimental results on the Pascal VOC Keypoint dataset with outliers:

| Method  | aero | bike | bird | boat | bottle | bus  | car  | cat  | chair | cow  | table | dog  | horse | mbike | person | plant | sheep | sofa | train | tv   | mean |
|---------|------|------|------|------|--------|------|------|------|-------|------|-------|------|-------|-------|--------|-------|-------|------|-------|------|------|
| GMN     | 34.2 | 55.0 | 46.4 | 39.6 | 77.0   | 60.5 | 46.9 | 54.5 | 31.7  | 51.0 | 48.0  | 48.0 | 48.5  | 50.8  | 28.8   | 73.8  | 49.8  | 38.3 | 69.4  | 83.9 | 51.8 |
| GMN-PL  | 38.9 | 56.1 | 47.9 | 41.0 | 79.1   | 66.5 | 49.0 | 57.9 | 33.7  | 54.4 | 43.7  | 49.5 | 53.5  | 55.4  | 31.2   | 76.6  | 53.0  | 37.8 | 71.3  | 86.4 | 54.1 |
| PIA-GM  | 43.8 | 60.6 | 51.5 | 43.5 | 75.4   | 70.6 | 58.9 | 62.0 | 35.3  | 54.4 | 44.3  | 57.1 | 56.1  | 58.6  | 40.0   | 76.5  | 60.1  | 36.5 | 76.1  | 86.3 | 57.4 |
| PCA-GM  | 44.6 | 63.6 | 53.7 | 45.9 | 78.0   | 69.5 | 52.7 | 63.1 | 37.6  | 56.4 | 44.4  | 58.3 | 56.2  | 57.5  | 39.0   | 80.1  | 59.6  | 40.2 | 69.4  | 87.1 | 57.8 |
| IPCA-GM | 44.5 | 63.9 | 54.6 | 47.6 | 79.9   | 69.8 | 54.7 | 64.4 | 37.9  | 59.4 | 55.6  | 57.5 | 57.5  | 57.4  | 40.2   | 80.1  | 60.0  | 41.2 | 71.4  | 86.9 | 59.2 |

Visualization
Visualization results are provided on the test sets of the 20 VOC categories.
Keypoints drawn in the same color denote a predicted correspondence. A white outer ring marks a correct prediction, while a red outer ring marks an incorrect one. (Click an image to zoom in.)