Unlocking Interpretability in Signed Graph Neural Networks

Posted on Aug 18, 2024

Introduction

Understanding how graph neural networks (GNNs) arrive at their predictions is crucial, and it is especially challenging on signed graphs, where edges carry positive or negative labels. My research addressed this interpretability gap by designing SE-SGformer, a self-explainable signed graph transformer.

[Figure: GNN interpretability]

Key Contributions

  1. Developed a position encoding based on signed random walks (sketched in the code after this list).
  2. Achieved a 2.2% improvement in prediction accuracy and a 73.1% increase in interpretability.
  3. Refined the model design and experiments on real-world signed graph datasets.
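
Contribution 1 is the piece that lends itself to a quick sketch. The snippet below is a minimal illustration of one plausible signed random-walk position encoding, not the paper's implementation: it adapts the common random-walk recipe (reading per-node features off powers of the walk matrix) to a signed adjacency matrix, and the function name and step count k are my own.

    import numpy as np

    def signed_rw_position_encoding(A, k=4):
        """Sketch: per-node encoding from k steps of a signed random walk.

        A : (n, n) signed adjacency matrix with entries in {-1, 0, +1}.
        k : number of walk steps; each step adds one encoding dimension.
        """
        n = A.shape[0]
        # Normalize each row by its number of neighbors (sign-agnostic),
        # so every step spreads unit mass with +/- signs attached.
        deg = np.abs(A).sum(axis=1, keepdims=True)
        deg[deg == 0] = 1.0                  # isolated nodes: avoid 0-division
        P = A / deg                          # signed random-walk matrix

        enc = np.empty((n, k))
        Pt = np.eye(n)
        for t in range(k):
            Pt = Pt @ P                      # (t+1)-step signed walk matrix
            enc[:, t] = np.diag(Pt)          # signed "return" value per node
        return enc

    # Toy example: a triangle whose edge (0, 2) is negative.
    A = np.array([[ 0, 1, -1],
                  [ 1, 0,  1],
                  [-1, 1,  0]], dtype=float)
    print(signed_rw_position_encoding(A, k=3))

Intuitively, a node's encoding then reflects how often signed walks return to it and with what accumulated sign, which lets a transformer distinguish structurally similar nodes that sit in differently signed neighborhoods.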

Impact

By pairing higher prediction accuracy with built-in explanations, this work pushes the boundaries of explainable AI in graph-based tasks.