Unlocking Interpretability in Signed Graph Neural Networks
Designing a self-explainable graph transformer model (SE-SGformer) to address interpretability challenges in signed graphs.
Introduction
Understanding the workings of graph neural networks (GNNs) is crucial for building trustworthy AI systems. My research addressed interpretability challenges in signed graphs — networks where edges can represent both positive and negative relationships — by designing a self-explainable graph transformer model called SE-SGformer.
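For concreteness, a signed graph can be stored as an ordinary edge list with a sign attached to each edge. The toy example below is purely illustrative and is not taken from the paper's datasets:

```python
# Toy signed graph as an edge list of (u, v, sign) triples:
# +1 = positive tie (e.g. trust), -1 = negative tie (e.g. distrust).
# Illustrative example only.
signed_edges = [
    (0, 1, +1),   # node 0 trusts node 1
    (1, 2, +1),   # node 1 trusts node 2
    (0, 2, -1),   # node 0 distrusts node 2
]

positive = sum(1 for _, _, s in signed_edges if s > 0)
negative = sum(1 for _, _, s in signed_edges if s < 0)
```

Standard GNNs assume all edges convey the same kind of relationship, which is why signed graphs need dedicated handling of the edge sign.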
Key Contributions
Signed Random Walk Position Encoding
Developed a novel position encoding scheme based on signed random walks that captures both positive and negative edge semantics within graph structures.
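The idea behind a signed random walk is that a walk's overall sign is the product of the edge signs along it. A minimal sketch of one possible node feature built on this idea follows; the function name, walk parameters, and the specific choice of feature (the fraction of walks that stay net-positive after each step) are illustrative assumptions, not the paper's actual encoding:

```python
import random
from collections import defaultdict

def signed_random_walk_encoding(edges, num_nodes, walk_len=4,
                                num_walks=50, seed=0):
    """Hypothetical sketch: for each node, estimate the fraction of
    random walks whose accumulated sign (product of edge signs) is
    still positive after each step. Returns one vector per node."""
    rng = random.Random(seed)
    adj = defaultdict(list)
    for u, v, s in edges:          # s is +1 or -1
        adj[u].append((v, s))
        adj[v].append((u, s))      # treat the graph as undirected
    encodings = []
    for node in range(num_nodes):
        pos_frac = [0.0] * walk_len
        for _ in range(num_walks):
            cur, sign = node, 1
            for step in range(walk_len):
                if not adj[cur]:   # dead end: stop this walk
                    break
                cur, s = rng.choice(adj[cur])
                sign *= s          # flip the walk's sign on a negative edge
                if sign > 0:
                    pos_frac[step] += 1.0 / num_walks
        encodings.append(pos_frac)
    return encodings
```

Because the sign flips on every negative edge, this kind of feature distinguishes structurally identical neighborhoods that differ only in edge polarity, which an unsigned random walk encoding cannot do.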
Prediction Accuracy
Achieved a 2.2% improvement in prediction accuracy over state-of-the-art methods on real-world signed graph datasets.
Interpretability Boost
Improved model interpretability by 73.1%, making the decision-making process of signed GNNs far more transparent.
Impact
This work pushes the boundaries of explainable AI in graph-based tasks, providing researchers and practitioners with tools to better understand how signed GNNs arrive at their predictions. The SE-SGformer model demonstrates that high performance and interpretability are not mutually exclusive goals.