Towards Efficient Point Cloud Graph Neural Networks Through Architectural Simplification

Abstract

In recent years, graph neural network (GNN)-based approaches have become a popular strategy for processing point cloud data, regularly achieving state-of-the-art performance on a variety of tasks. To date, the research community has primarily focused on improving model expressiveness, with secondary thought given to how to design models that can run efficiently on resource-constrained mobile devices such as smartphones or mixed reality headsets. In this work we take a step towards improving the efficiency of these models by observing that these GNN models are heavily limited by the representational power of their first, feature-extracting, layer. We find that these models can be radically simplified, with minimal degradation to model performance, so long as the feature extraction layer is retained; further, we discover that it is possible to improve overall performance on ModelNet40 and S3DIS by improving the design of the feature extractor. Our approach reduces memory consumption by 20x and latency by up to 9.9x for graph layers in models such as DGCNN; overall, we achieve speed-ups of up to 4.5x and peak memory reductions of 72.5%.
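To make the idea concrete, below is a minimal PyTorch sketch of this style of simplification: a single EdgeConv-style kNN feature extractor is retained as the first layer, and the remaining graph layers are replaced with plain point-wise MLPs. The layer widths, the value of k, the 40-class ModelNet40-style head, and the names `EdgeConvExtractor` and `SimplifiedPointGNN` are illustrative assumptions, not the exact architecture from the paper.

```python
# Minimal sketch (not the paper's exact model): keep one EdgeConv-style
# feature extractor, replace later graph layers with point-wise MLPs.
import torch
import torch.nn as nn


def knn_graph(x: torch.Tensor, k: int) -> torch.Tensor:
    """Indices of the k nearest neighbours per point. x: (B, N, C) -> (B, N, k)."""
    dists = torch.cdist(x, x)                                   # (B, N, N) pairwise distances
    return dists.topk(k + 1, largest=False).indices[..., 1:]    # drop the self-match


class EdgeConvExtractor(nn.Module):
    """First-layer feature extractor: EdgeConv-style max aggregation over kNN edges."""

    def __init__(self, in_dim: int, out_dim: int, k: int = 20):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(
            nn.Linear(2 * in_dim, out_dim), nn.BatchNorm1d(out_dim), nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, N, C = x.shape
        idx = knn_graph(x, self.k)                               # (B, N, k)
        neighbours = torch.gather(
            x.unsqueeze(1).expand(B, N, N, C), 2,
            idx.unsqueeze(-1).expand(B, N, self.k, C))           # (B, N, k, C)
        centre = x.unsqueeze(2).expand_as(neighbours)
        edge_feat = torch.cat([centre, neighbours - centre], dim=-1)   # (B, N, k, 2C)
        out = self.mlp(edge_feat.reshape(-1, 2 * C)).reshape(B, N, self.k, -1)
        return out.max(dim=2).values                             # max over neighbours


class SimplifiedPointGNN(nn.Module):
    """Feature extractor retained; subsequent graph layers replaced by point-wise MLPs."""

    def __init__(self, num_classes: int = 40, k: int = 20):
        super().__init__()
        self.extract = EdgeConvExtractor(3, 64, k)
        self.pointwise = nn.Sequential(                          # no further graph construction
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, 256), nn.ReLU(),
        )
        self.head = nn.Linear(256, num_classes)

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        feats = self.extract(xyz)                 # (B, N, 64): the one graph-based layer
        feats = self.pointwise(feats)             # cheap per-point processing
        return self.head(feats.max(dim=1).values)  # global max pool, then classify


if __name__ == "__main__":
    model = SimplifiedPointGNN()
    logits = model(torch.randn(2, 1024, 3))       # two clouds of 1024 xyz points
    print(logits.shape)                           # torch.Size([2, 40])
```

Because the point-wise layers involve no neighbour search or gather operations, their cost scales linearly with the number of points, which is where the memory and latency savings in the graph layers come from in this sketch.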

Shyam Tailor
Machine Learning PhD Student

My research interests include enabling efficient on-device machine learning algorithms through hardware-software co-design, and exploring the applications enabled by these advances.