Architecture Analysis for Symmetric Simplicial Deep Neural Networks on Chip

Nicolás Rodríguez, Martín Villemur, Pedro Julián

Publication: Conference proceedings / contribution in book/report › Conference article › Peer-reviewed

Abstract

Convolutional Neural Networks (CNNs) are the dominating Machine Learning (ML) architecture for complex tasks such as image classification, despite requiring heavy computational resources, large storage space and power-demanding hardware. This motivates the exploration of alternative implementations on efficient neuromorphic hardware for resource-constrained applications. Conventional Simplicial Piece-Wise Linear implementations enable efficient, multiplier-free hardware for running DNNs, but demand large amounts of memory. Symmetric Simplicial (SymSim) functions preserve the efficiency of the implementation while reducing the number of parameters per layer, and can be trained to replace convolutional layers and to natively run non-linear filters such as MaxPool. This paper analyzes architectures for a Neural Network accelerator for SymSim operations, optimizing the number of parallel cores to reduce the computation time. For this, we develop a model that accounts for both the core processing times and the data transfer times.
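The trade-off described in the abstract can be illustrated with a minimal sketch. The model below is purely hypothetical (the paper's actual timing model and parameters are not given here): it assumes a layer's latency is the parallelizable compute time divided across cores, plus a fixed data-transfer time, plus a per-core synchronization overhead, so that adding cores eventually stops paying off.

```python
# Hypothetical latency model: all function names and numbers are
# illustrative assumptions, not values from the paper.

def layer_latency(n_cores, work_ops, ops_per_cycle, transfer_cycles, sync_per_core):
    """Estimated cycles for one layer on n_cores parallel cores."""
    compute = work_ops / (n_cores * ops_per_cycle)  # work split across cores
    overhead = sync_per_core * n_cores              # per-core coordination cost
    return compute + transfer_cycles + overhead

def best_core_count(max_cores, **model_params):
    """Core count minimizing the modeled latency."""
    return min(range(1, max_cores + 1),
               key=lambda n: layer_latency(n, **model_params))

# Example: 1M operations, fixed transfer cost, linear sync overhead.
n = best_core_count(128, work_ops=1_000_000, ops_per_cycle=1,
                    transfer_cycles=1_000, sync_per_core=100)
```

With these illustrative numbers the compute term falls as 1/n while the overhead grows linearly, so an interior optimum exists rather than "more cores is always faster" — the kind of balance the paper's architecture analysis targets.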
Original language: English
Title: 2023 57th Annual Conference on Information Sciences and Systems (CISS)
Pages: 1-6
Number of pages: 6
DOIs
Publication status: Published - 24 Mar 2023
Event: 2023 57th Annual Conference on Information Sciences and Systems (CISS) - Baltimore, MD, USA
Duration: 22 Mar 2023 - 24 Mar 2023

Conference

Conference: 2023 57th Annual Conference on Information Sciences and Systems (CISS)
Period: 22/03/23 - 24/03/23
