Abstract
Convolutional Neural Networks (CNNs) are the dominating Machine Learning (ML) architecture for complex tasks such as image classification, despite requiring heavy computational resources, large storage space, and power-demanding hardware. This motivates the exploration of alternative implementations on efficient neuromorphic hardware for resource-constrained applications. Conventional Simplicial Piece-Wise Linear implementations allow efficient hardware to run DNNs by avoiding multipliers, but demand large amounts of memory. Symmetric Simplicial (SymSim) functions preserve the efficiency of the implementation while reducing the number of parameters per layer, and can be trained to replace convolutional layers and to natively run non-linear filters such as MaxPool. This paper analyzes architectures for a Neural Network accelerator for SymSim operations, optimizing the number of parallel cores to reduce computation time. To this end, we develop a model that accounts for the core processing times as well as the data transfer times.
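The paper's timing model itself is not reproduced in this record; as a rough illustration of the kind of trade-off it describes, the following sketch assumes a layer's compute work divides evenly across cores while data transfers are serialized on a shared bus, and that each extra core adds a fixed overhead. All function names and numbers are hypothetical, not taken from the paper.

```python
# Hypothetical core-count optimization sketch. Assumptions (not from the
# paper): compute parallelizes perfectly, transfers are serialized, and
# each added core costs a fixed overhead (e.g., extra parameter writes).

def layer_time(num_cores, compute_cycles, transfer_cycles, per_core_overhead=0.0):
    """Estimated time to process one layer with `num_cores` parallel cores."""
    compute = compute_cycles / num_cores       # work splits across cores
    transfer = transfer_cycles                 # shared bus: not parallelized
    overhead = per_core_overhead * num_cores   # grows with the core count
    return compute + transfer + overhead

def best_core_count(compute_cycles, transfer_cycles,
                    max_cores=64, per_core_overhead=0.0):
    """Core count minimizing the estimated layer time under this model."""
    return min(range(1, max_cores + 1),
               key=lambda n: layer_time(n, compute_cycles, transfer_cycles,
                                        per_core_overhead))

if __name__ == "__main__":
    # With nonzero per-core overhead, adding cores eventually stops paying off.
    print(best_core_count(compute_cycles=1_000_000, transfer_cycles=50_000,
                          per_core_overhead=2_000.0))
```

Under such a model the optimum is finite only when some cost grows with the core count; with zero per-core overhead the estimate decreases monotonically and the search simply returns `max_cores`.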
| Original language | English |
|---|---|
| Title of host publication | 2023 57th Annual Conference on Information Sciences and Systems (CISS) |
| Pages | 1-6 |
| Number of pages | 6 |
| DOIs | |
| Publication status | Published - 24 Mar 2023 |
| Event | 2023 57th Annual Conference on Information Sciences and Systems (CISS) - Baltimore, MD, USA. Duration: 22 Mar 2023 → 24 Mar 2023 |
Conference

| Conference | 2023 57th Annual Conference on Information Sciences and Systems (CISS) |
|---|---|
| Period | 22/03/23 → 24/03/23 |
Keywords
- Neuromorphics
- Shape
- Computational modeling
- Neural networks
- Writing
- Data transfer
- Hardware