Self-Explaining Neural Network

Interpretable network can accurately predict splicing and explain its reasoning

Neural networks, particularly deep learning models, are often likened to black boxes because of how opaquely they operate. While these networks can achieve remarkable accuracy, understanding precisely how they arrive at a specific decision can be elusive. The interplay of weights, biases, and activations across potentially millions or even billions of parameters in multi-layer architectures makes it difficult to dissect the individual contribution of each element. This lack of transparency poses problems in fields where interpretability and explainability are crucial, such as healthcare or finance: the network may provide an answer or prediction, but the intricate, interconnected pathways leading to that conclusion remain hidden.

In work announced October 6, 2023, a team of computer scientists at New York University created a neural network that can explain how it reaches its predictions. The network uses AI to examine biological questions such as RNA splicing, a key step in converting the information stored in DNA into functional gene products. Lead researcher Oded Regev notes that many neural networks are black boxes, making it hard to trust their outputs, whereas their interpretable network can accurately predict splicing and explain its reasoning.

The researchers designed the network around current knowledge of RNA splicing, allowing scientists to trace and quantify the splicing process from input to output. Regev says this "interpretable-by-design" approach provides insights into fundamental genomic processes like splicing. Among its findings, the model revealed a small hairpin RNA structure that decreases splicing.
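To give a flavor of what "interpretable by design" means, here is a minimal illustrative sketch, not the NYU model: a predictor whose output is a transparent sum of feature contributions, so the effect of each input feature (including a hairpin-like motif with a negative weight) can be traced directly. The motifs and weights below are hypothetical, chosen for illustration only.

```python
# Illustrative sketch of an "interpretable-by-design" predictor.
# NOT the NYU splicing model: features and weights are hypothetical.

def count_motif(seq, motif):
    """Count overlapping occurrences of a motif in an RNA sequence."""
    return sum(1 for i in range(len(seq) - len(motif) + 1)
               if seq[i:i + len(motif)] == motif)

# Hypothetical features: (motif, weight). A real model would learn such
# weights from splicing measurements; here they are hand-picked.
FEATURES = {
    "donor_like":   ("GU",   0.8),   # splice-site-like motif, promotes splicing
    "hairpin_stem": ("GCGC", -1.2),  # self-complementary run that can fold back
}

def splicing_score(seq):
    """Return (score, contributions); every term in the score is traceable."""
    contributions = {name: weight * count_motif(seq, motif)
                     for name, (motif, weight) in FEATURES.items()}
    return sum(contributions.values()), contributions

score, parts = splicing_score("GUGCGCGCGU")
print(parts)   # signed contribution of each feature to the prediction
print(score)   # overall score is just the sum of those contributions
```

Because the prediction is an explicit sum, "explaining" it amounts to reading off each feature's signed contribution, which is the basic property a black-box deep network lacks.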

The team confirmed the model's discovery experimentally: whenever the RNA folded into the hairpin, splicing stopped, and disrupting the structure restored it. The research was supported by grants from organizations including the NSF and the Simons Foundation. The interpretable network offers a new window into how machine-learning models arrive at their predictions.

Webdesk AI News : Self-Explaining Neural Network, October 6, 2023