Version 1: Received: 7 July 2022 / Approved: 8 July 2022 / Online: 8 July 2022 (10:40:18 CEST)
Version 2: Received: 8 June 2023 / Approved: 9 June 2023 / Online: 9 June 2023 (08:57:55 CEST)
Version 3: Received: 26 October 2023 / Approved: 27 October 2023 / Online: 30 October 2023 (05:54:50 CET)
Version 4: Received: 24 May 2024 / Approved: 27 May 2024 / Online: 27 May 2024 (05:47:24 CEST)
How to cite:
Zhang, Y.; Zhao, J.; Wu, W.; Muscoloni, A.; Cannistraci, C. V. Epitopological Sparse Ultra-Deep Learning: A Brain-Network Topological Theory Carves Communities in Sparse and Percolated Hyperbolic ANNs. Preprints 2022, 2022070139. https://doi.org/10.20944/preprints202207.0139.v2
APA Style
Zhang, Y., Zhao, J., Wu, W., Muscoloni, A., & Cannistraci, C. V. (2023). Epitopological Sparse Ultra-Deep Learning: A Brain-Network Topological Theory Carves Communities in Sparse and Percolated Hyperbolic ANNs. Preprints. https://doi.org/10.20944/preprints202207.0139.v2
Chicago/Turabian Style
Zhang, Y., Zhao, J., Wu, W., Alessandro Muscoloni, and Carlo Vittorio Cannistraci. 2023. "Epitopological Sparse Ultra-Deep Learning: A Brain-Network Topological Theory Carves Communities in Sparse and Percolated Hyperbolic ANNs." Preprints. https://doi.org/10.20944/preprints202207.0139.v2
Abstract
Sparse training (ST) aims to improve deep learning by replacing fully connected artificial neural networks (ANNs) with sparse ones, akin to the structure of brain networks. It might therefore be beneficial to borrow brain-inspired learning paradigms from complex network intelligence theory. Epitopological learning (EL) is a field of network science that studies how to implement learning on networks by changing the shape of their connectivity structure (epitopological plasticity). One way to implement EL is via link prediction: predicting the existence likelihood of non-observed links in a network. Cannistraci-Hebb (CH) learning theory inspired the CH3-L3 network automata rule, which is effective for general-purpose link prediction. Here, starting from CH3-L3, we propose Epitopological Sparse Ultra-deep Learning (ESUL) to apply EL to sparse training. In empirical experiments, we find that ESUL learns ANNs with a sparse hyperbolic topology in which a community layer organization emerges that is ultra-deep (meaning that each layer also has an internal depth due to power-law node hierarchy). Furthermore, we discover that ESUL automatically sparsifies the neurons during training (leaving as few as 30% of the neurons in hidden layers); this process of dynamic node removal is called percolation. We then design CH training (CHT), a training methodology with ESUL at its heart, with the aim of enhancing prediction performance. CHT consists of 4 parts: (i) correlated sparse topological initialization (CSTI), to initialize the network with a hierarchical topology; (ii) sparse weighting initialization (SWI), to tailor weight initialization to a sparse topology; (iii) ESUL, to shape the ANN topology during training; (iv) early stop with weight refinement, to tune only the weights once the topology reaches stability. We conduct experiments on 6 datasets and 3 network structures (MLPs, VGG16, Transformer), comparing CHT to a state-of-the-art sparse training method and to fully connected networks. By significantly reducing the node size while retaining performance, CHT represents the first example of parsimony sparse training.
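As a loose illustration of the link-prediction step at the heart of EL (not the authors' implementation), the sketch below ranks non-observed node pairs by their length-3 paths. It is a simplified stand-in for the CH3-L3 rule: the toy graph, the score (path count penalized by the plain degree of the intermediate nodes, whereas CH3-L3 uses local-community external degrees), and all names are illustrative assumptions.

```python
from itertools import combinations
from math import sqrt

# Toy undirected graph as adjacency sets (illustrative data: a 6-cycle).
adj = {
    0: {1, 2}, 1: {0, 3}, 2: {0, 4},
    3: {1, 5}, 4: {2, 5}, 5: {3, 4},
}

def l3_score(u, v):
    """Simplified length-3 path score: each path u-i-j-v contributes
    1/sqrt(deg(i)*deg(j)), penalizing hub intermediaries. CH3-L3 itself
    uses external degrees of the local community; this is a sketch."""
    score = 0.0
    for i in adj[u] - {v}:                  # first hop, skip the target
        for j in adj[i] & adj[v]:           # second hop must reach v
            if j not in (u, v):             # keep the path simple
                score += 1.0 / sqrt(len(adj[i]) * len(adj[j]))
    return score

# Rank every non-observed pair; in a sparse training regrowth step,
# the top-ranked pairs would become newly added connections.
candidates = [(u, v) for u, v in combinations(adj, 2) if v not in adj[u]]
ranking = sorted(candidates, key=lambda p: l3_score(*p), reverse=True)
```

On this toy cycle the three "opposite" pairs (0,5), (1,4), and (2,3) each have two length-3 paths and rank highest, while pairs at cycle distance 2 have no length-3 path and score zero.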
Subject: Computer Science and Mathematics, Artificial Intelligence and Machine Learning
Copyright:
This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.