Please use this identifier to cite or link to this item: http://hdl.handle.net/1942/39095
Full metadata record
DC Field | Value
dc.contributor.author: Huang, Kai
dc.contributor.author: Li, Bowen
dc.contributor.author: CHEN, Siang
dc.contributor.author: CLAESEN, Luc
dc.contributor.author: Xi, Wei
dc.contributor.author: Chen, Junjian
dc.contributor.author: Jiang, Xiaowen
dc.contributor.author: Liu, Zhili
dc.contributor.author: Xiong, Dongliang
dc.contributor.author: Yan, Xiaolang
dc.date.accessioned: 2022-12-22T10:05:37Z
dc.date.available: 2022-12-22T10:05:37Z
dc.date.issued: 2023
dc.date.submitted: 2022-12-22T09:29:23Z
dc.identifier.citation: IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 42 (1), p. 190-203
dc.identifier.issn: 0278-0070
dc.identifier.uri: http://hdl.handle.net/1942/39095
dc.description.abstract: Deep neural networks (DNNs) have become a powerful algorithm in the field of artificial intelligence, and have shown outstanding performance across a variety of computer vision applications, including image classification [1], object detection [2], and super resolution [3]. However, DNN inference requires vast computing and storage resources, making it a challenge to deploy DNNs onto edge devices, which have stringent constraints on resources and energy.
dc.language.iso: en
dc.publisher:
dc.rights: 2022 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See https://www.ieee.org/publications/rights/index.html for more information.
dc.subject.other: Algorithm-architecture codesign
dc.subject.other: compression and acceleration
dc.subject.other: neural networks
dc.subject.other: quantization
dc.subject.other: systolic array
dc.title: Structured Term Pruning for Computational Efficient Neural Networks Inference
dc.type: Journal Contribution
dc.identifier.epage: 203
dc.identifier.issue: 1
dc.identifier.spage: 190
dc.identifier.volume: 42
local.bibliographicCitation.jcat: A1
local.publisher.place: 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA
local.type.refereed: Refereed
local.type.specified: Article
dc.identifier.doi: 10.1109/TCAD.2022.3168506
dc.identifier.isi: 000920800400015
dc.identifier.eissn: 1937-4151
local.provider.type: CrossRef
local.uhasselt.international: yes
item.accessRights: Restricted Access
item.fullcitation: Huang, Kai; Li, Bowen; CHEN, Siang; CLAESEN, Luc; Xi, Wei; Chen, Junjian; Jiang, Xiaowen; Liu, Zhili; Xiong, Dongliang & Yan, Xiaolang (2023) Structured Term Pruning for Computational Efficient Neural Networks Inference. In: IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 42 (1), p. 190-203.
item.fulltext: With Fulltext
item.contributor: Huang, Kai
item.contributor: Li, Bowen
item.contributor: CHEN, Siang
item.contributor: CLAESEN, Luc
item.contributor: Xi, Wei
item.contributor: Chen, Junjian
item.contributor: Jiang, Xiaowen
item.contributor: Liu, Zhili
item.contributor: Xiong, Dongliang
item.contributor: Yan, Xiaolang
crisitem.journal.issn: 0278-0070
crisitem.journal.eissn: 1937-4151
Appears in Collections: Research publications
Files in This Item:
Structured_Term_Pruning_for_Computational_Efficient_Neural_Networks_Inference.pdf (Published version, 3.14 MB, Adobe PDF, Restricted Access)

Web of Science™ Citations: 2 (checked on May 18, 2024)