Please use this identifier to cite or link to this item: http://hdl.handle.net/1942/26396
Full metadata record
DC Field: Value
dc.contributor.author: Chakroun, Imen
dc.contributor.author: HABER, Tom
dc.contributor.author: Ashby, Thomas J.
dc.date.accessioned: 2018-07-20T15:15:48Z
dc.date.available: 2018-07-20T15:15:48Z
dc.date.issued: 2017
dc.identifier.citation: Koumoutsakos, Pedro; Lees, Michael; Krzhizhanovskaya, Valeria; Dongarra, Jack; Sloot, Peter M. A. (Ed.). International conference on computational science (ICCS 2017), Elsevier Science BV, p. 2318-2322
dc.identifier.issn: 1877-0509
dc.identifier.uri: http://hdl.handle.net/1942/26396
dc.description.abstract: Stochastic Gradient Descent (SGD, or 1-SGD in our notation) is probably the most popular family of optimisation algorithms used in machine learning on large data sets, due to its ability to optimise efficiently with respect to the number of complete training-set data touches (epochs) used. Various authors have worked on data or model parallelism for SGD, but there is little work on how SGD fits with the memory hierarchies ubiquitous in HPC machines. Standard practice suggests randomising the order of training points and streaming the whole set through the learner, which results in extremely low temporal locality of access to the training set and thus, on large data sets, makes minimal use of the small, fast layers of memory in an HPC memory hierarchy. Mini-batch SGD with batch size n (n-SGD) is often used to control the noise on the gradient and make convergence smoother and easier to identify, but this can reduce learning efficiency with respect to epochs when compared to 1-SGD, whilst having the same extremely low temporal locality. In this paper we introduce Sliding Window SGD (SW-SGD), which uses temporal locality of training point access in an attempt to combine the advantages of 1-SGD (epoch efficiency) with those of n-SGD (smoother convergence that is easier to identify) by leveraging HPC memory hierarchies. We give initial results on part of the Pascal dataset showing that memory hierarchies can be used to improve SGD performance. © 2017 The Authors. Published by Elsevier B.V.
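To make the abstract's idea concrete, below is a minimal Python sketch of one plausible reading of SW-SGD, applied to binary logistic regression: each step streams in exactly one new training point (the single data touch per point per epoch of 1-SGD), but the gradient is averaged over a window of the n most recently touched points (the smoothing of n-SGD), so the extra gradient terms come from points likely still resident in the fast layers of the memory hierarchy. The function name sw_sgd_logistic, the logistic loss, and the uniform average over the window are illustrative assumptions; this record does not give the authors' exact update rule.

```python
# Hypothetical sketch of the sliding-window idea from the abstract; the
# windowed-average update and the logistic-regression loss are assumptions,
# not the authors' published algorithm.
from collections import deque

import numpy as np


def sw_sgd_logistic(X, y, n=32, lr=0.1, epochs=5, seed=0):
    """Sliding-window SGD sketch for binary logistic regression, y in {0, 1}."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    window = deque(maxlen=n)  # indices of the n most recently touched points
    for _ in range(epochs):
        for i in rng.permutation(len(X)):  # stream points in random order
            window.append(i)               # one fresh data touch, as in 1-SGD
            idx = list(window)             # older points re-read from the window
            z = np.clip(X[idx] @ w, -30.0, 30.0)      # avoid exp overflow
            p = 1.0 / (1.0 + np.exp(-z))              # sigmoid predictions
            grad = X[idx].T @ (p - y[idx]) / len(idx) # window-mean gradient
            w -= lr * grad
    return w


# Tiny usage example on synthetic data (illustrative only).
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 10))
    true_w = rng.normal(size=10)
    y = (X @ true_w > 0).astype(float)
    w = sw_sgd_logistic(X, y, n=32)
    print("training accuracy:", ((X @ w > 0).astype(float) == y).mean())
```

In this sketch, n=1 recovers plain 1-SGD, while larger n smooths the gradient using points that are re-read from the window rather than newly streamed from the full data set.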
dc.description.sponsorship: European project ExCAPE [2] from the European Union's Horizon 2020 Research and Innovation programme [671555]
dc.language.iso: en
dc.publisher: Elsevier Science BV
dc.relation.ispartofseries: Procedia Computer Science
dc.rights: © 2017 The Authors. Published by Elsevier B.V.
dc.subject.other: SGD; sliding window; machine learning; SVM; logistic regression
dc.title: SW-SGD: The Sliding Window Stochastic Gradient Descent Algorithm
dc.type: Proceedings Paper
local.bibliographicCitation.authors: Koumoutsakos, Pedro
local.bibliographicCitation.authors: Lees, Michael
local.bibliographicCitation.authors: Krzhizhanovskaya, Valeria
local.bibliographicCitation.authors: Dongarra, Jack
local.bibliographicCitation.authors: Sloot, Peter M. A.
local.bibliographicCitation.conferencedate: 12-14/07/2017
local.bibliographicCitation.conferencename: International Conference on Computational Science (ICCS)
local.bibliographicCitation.conferenceplace: Zurich, Switzerland
dc.identifier.epage: 2322
dc.identifier.spage: 2318
dc.identifier.volume: 108
local.format.pages: 5
local.bibliographicCitation.jcat: C1
dc.description.notes: [Chakroun, Imen; Ashby, Thomas J.] IMEC, Kapeldreef 75, B-3001 Leuven, Belgium. [Haber, Tom] Expertise Ctr Digital Media, Wetenschapspk 2, B-3590 Diepenbeek, Belgium. [Chakroun, Imen; Haber, Tom; Ashby, Thomas J.] ExaSci Life Lab, Kapeldreef 75, B-3001 Leuven, Belgium.
local.publisher.place: Amsterdam, The Netherlands
local.type.refereed: Refereed
local.type.specified: Proceedings Paper
local.relation.ispartofseriesnr: 108
local.class: dsPublValOverrule/author_version_not_expected
local.type.programme: H2020
local.relation.h2020: 671555
dc.identifier.doi: 10.1016/j.procs.2017.05.082
dc.identifier.isi: 000404959000243
local.bibliographicCitation.btitle: International conference on computational science (ICCS 2017)
item.accessRights: Open Access
item.validation: ecoom 2018
item.fulltext: With Fulltext
item.contributor: Chakroun, Imen
item.contributor: HABER, Tom
item.contributor: Ashby, Thomas J.
item.fullcitation: Chakroun, Imen; HABER, Tom & Ashby, Thomas J. (2017) SW-SGD: The Sliding Window Stochastic Gradient Descent Algorithm. In: Koumoutsakos, Pedro; Lees, Michael; Krzhizhanovskaya, Valeria; Dongarra, Jack; Sloot, Peter M. A. (Ed.). International conference on computational science (ICCS 2017), Elsevier Science BV, p. 2318-2322.
Appears in Collections: Research publications
Files in This Item:
Haber.pdf (Published version, 473.7 kB, Adobe PDF)
Scopus Citations: 8 (checked on Sep 3, 2020)
Web of Science Citations: 12 (checked on Apr 30, 2024)
Page view(s): 132 (checked on Sep 5, 2022)
Download(s): 172 (checked on Sep 5, 2022)