Integrating L0 regularization into Multi-layer Logical Perceptron for Interpretable Classification
Conference proceedings contribution
Publication date:
2025
Abstract:
Deep neural networks are widely used in practical AI applications; however, their inner structure and complexity make them generally hard to interpret. Model transparency and interpretability are key requirements in many scenarios where high performance alone is not enough to justify adopting a solution. In this work, we adapt a differentiable approximation of L0 regularization to a logic-based neural network, the Multi-layer Logical Perceptron (MLLP), and evaluate its effectiveness in reducing the complexity of its interpretable discrete version, the Concept Rule Set (CRS), while preserving its performance. Results are compared against alternative heuristics, such as Random Binarization of the network weights, to assess whether better results can be achieved with a less noisy technique that sparsifies the network based on the loss function rather than on a random distribution.
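The abstract refers to a "differentiable approximation of L0 regularization". A common instantiation of that idea is the hard-concrete stochastic gate of Louizos et al. (2018); the sketch below is illustrative under that assumption, not the exact MLLP integration described in the paper, and all names (`BETA`, `GAMMA`, `ZETA`, `log_alpha`) are the standard ones from that formulation rather than taken from this work.

```python
import numpy as np

# Hard-concrete gate parameters (Louizos et al., 2018): temperature and
# stretch interval. These are conventional defaults, not values from the paper.
BETA, GAMMA, ZETA = 2 / 3, -0.1, 1.1

def sample_gates(log_alpha, rng):
    """Sample stretched-and-clipped gates z in [0, 1]; exact zeros prune weights."""
    u = rng.uniform(1e-6, 1 - 1e-6, size=log_alpha.shape)
    s = 1 / (1 + np.exp(-(np.log(u) - np.log(1 - u) + log_alpha) / BETA))
    return np.clip(s * (ZETA - GAMMA) + GAMMA, 0.0, 1.0)

def expected_l0(log_alpha):
    """Differentiable penalty: probability that each gate is non-zero."""
    return 1 / (1 + np.exp(-(log_alpha - BETA * np.log(-GAMMA / ZETA))))

rng = np.random.default_rng(0)
log_alpha = np.array([-4.0, 0.0, 4.0])   # one learnable gate per weight
z = sample_gates(log_alpha, rng)          # multiplies the network weights
penalty = expected_l0(log_alpha).sum()    # added to the task loss
```

During training the penalty term pushes `log_alpha` toward negative values, so the corresponding gates (and the rules they control in the discretized CRS) are driven to exactly zero, which is the loss-driven sparsification contrasted with Random Binarization in the abstract.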
CRIS type:
04.01 - Contribution in conference proceedings
Keywords:
Interpretable Classification; Logical Perceptron; Propositional Network
Authors:
Jaimovitch-Lopez, G.; Bergamin, L.; Aiolli, F.; Confalonieri, R.
Book title:
CEUR Workshop Proceedings