Please use this identifier to cite or link to this item: http://hdl.handle.net/1942/39095
Title: Structured Term Pruning for Computational Efficient Neural Networks Inference
Authors: Huang, Kai
Li, Bowen
Chen, Siang
Claesen, Luc
Xi, Wei
Chen, Junjian
Jiang, Xiaowen
Liu, Zhili
Xiong, Dongliang
Yan, Xiaolang
Issue Date: 2023
Publisher: IEEE
Source: IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 42 (1), p. 190-203
Abstract: Deep neural networks (DNNs) have become a powerful algorithm in the field of artificial intelligence and have shown outstanding performance across a variety of computer vision applications, including image classification [1], object detection [2], and super-resolution [3]. However, DNN inference requires vast computing and storage resources, making it challenging to deploy DNNs on edge devices with stringent resource and energy constraints.
Keywords: Algorithm-architecture codesign;compression and acceleration;neural networks;quantization;systolic array
Document URI: http://hdl.handle.net/1942/39095
ISSN: 0278-0070
e-ISSN: 1937-4151
DOI: 10.1109/TCAD.2022.3168506
ISI #: 000920800400015
Rights: 2022 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See https://www.ieee.org/publications/rights/index.html for more information.
Category: A1
Type: Journal Contribution
Appears in Collections:Research publications

Files in This Item:
File: Structured_Term_Pruning_for_Computational_Efficient_Neural_Networks_Inference.pdf
Description: Published version
Size: 3.14 MB
Format: Adobe PDF
Access: Restricted Access

Web of Science citations: 2 (checked on Apr 24, 2024)

