Do We Need Anisotropic Graph Neural Networks?

Abstract

Training and deploying graph neural networks (GNNs) remains difficult due to their high memory consumption and inference latency. In this work we present a new type of GNN architecture that achieves state-of-the-art performance with lower memory consumption and latency, along with characteristics suited to accelerator implementation. Our proposal uses memory proportional to the number of vertices in the graph, in contrast to competing methods which require memory proportional to the number of edges; we find that this efficient approach also achieves higher accuracy than strong baselines across 5 large and varied datasets. We achieve our results by using a novel adaptive filtering approach inspired by signal processing; it can be interpreted as enabling each vertex to have its own weight matrix, and is unrelated to attention. In keeping with our focus on efficient hardware usage, we also propose aggregator fusion, a technique that enables GNNs to significantly boost their representational power with only a 19% increase in latency over standard sparse matrix multiplication.
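To make the "each vertex has its own weight matrix" interpretation concrete, here is a minimal, illustrative PyTorch sketch, not the paper's actual layer: it assumes the per-vertex weights are realised by mixing a small set of shared basis weight matrices with coefficients computed from each vertex's own features, so the extra per-vertex state scales with the number of vertices rather than the number of edges. The class name, basis count, and mixing scheme are hypothetical choices for illustration only.

```python
import torch
import torch.nn as nn


class PerVertexMixedConv(nn.Module):
    """Illustrative isotropic layer: each vertex mixes B shared basis transforms."""

    def __init__(self, in_dim: int, out_dim: int, num_bases: int = 4):
        super().__init__()
        # Shared basis weight matrices, shape (B, in_dim, out_dim).
        self.bases = nn.Parameter(torch.randn(num_bases, in_dim, out_dim) * 0.01)
        # Per-vertex mixing coefficients come from the vertex's own features,
        # so the extra storage is O(V * B) rather than O(E).
        self.coeffs = nn.Linear(in_dim, num_bases)

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # x: (V, in_dim); edge_index: (2, E) with rows (source, target).
        src, dst = edge_index
        # Isotropic aggregation: mean of neighbour features, identical for every
        # incoming edge (no per-edge attention weights are stored).
        agg = torch.zeros_like(x)
        agg.index_add_(0, dst, x[src])
        deg = torch.zeros(x.size(0), device=x.device)
        deg.index_add_(0, dst, torch.ones(src.size(0), device=x.device))
        agg = agg / deg.clamp(min=1).unsqueeze(-1)
        # Apply every basis transform to the aggregated features: (B, V, out_dim).
        transformed = torch.einsum('vi,bio->bvo', agg, self.bases)
        # Per-vertex mixing coefficients: (V, B).
        w = torch.softmax(self.coeffs(x), dim=-1)
        # Effective per-vertex weight matrix = coefficient-weighted sum of bases.
        return torch.einsum('vb,bvo->vo', w, transformed)


# Toy usage: 5 vertices, 4 directed edges.
x = torch.randn(5, 8)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
layer = PerVertexMixedConv(8, 16)
out = layer(x, edge_index)
print(out.shape)  # torch.Size([5, 16])
```

Because the aggregation step treats every incoming edge identically, nothing per-edge needs to be materialised, which is the source of the O(V) vs O(E) memory contrast described in the abstract.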

Shyam Tailor
Machine Learning PhD Student

My research interests include enabling efficient on-device machine learning algorithms through hardware-software co-design, and exploring the applications enabled by these advances.