Please use this identifier to cite or link to this item: http://hdl.handle.net/1942/26396
Title: SW-SGD: The Sliding Window Stochastic Gradient Descent Algorithm
Authors: Chakroun, Imen
Haber, Tom
Ashby, Thomas J.
Issue Date: 2017
Publisher: Elsevier Science BV
Source: Koumoutsakos, Pedro; Lees, Michael; Krzhizhanovskaya, Valeria; Dongarra, Jack; Sloot, Peter M. A. (Eds.). International Conference on Computational Science (ICCS 2017), Elsevier Science BV, p. 2318-2322
Series/Report: Procedia Computer Science
Series/Report no.: 108
Abstract: Stochastic Gradient Descent (SGD, or 1-SGD in our notation) is probably the most popular family of optimisation algorithms used in machine learning on large data sets, owing to its efficiency with respect to the number of complete passes over the training set (epochs). Various authors have worked on data or model parallelism for SGD, but there is little work on how SGD fits with the memory hierarchies ubiquitous in HPC machines. Standard practice is to randomise the order of the training points and stream the whole set through the learner, which results in extremely low temporal locality of access to the training set and thus, for large data sets, makes minimal use of the small, fast layers of an HPC memory hierarchy. Mini-batch SGD with batch size n (n-SGD) is often used to control the noise on the gradient and make convergence smoother and easier to identify, but this can reduce learning efficiency with respect to epochs compared to 1-SGD, while exhibiting the same extremely low temporal locality. In this paper we introduce Sliding Window SGD (SW-SGD), which exploits temporal locality of training point access to combine the advantages of 1-SGD (epoch efficiency) with those of n-SGD (smoother, more easily identified convergence) by leveraging HPC memory hierarchies. We give initial results on part of the Pascal dataset showing that memory hierarchies can be used to improve SGD performance.
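
The abstract only outlines the sliding-window idea, so the following is a minimal sketch of one plausible reading, not the authors' reference implementation: each fresh training point is streamed once per epoch (as in 1-SGD), while the gradient for each update is averaged over that point plus the most recently seen points held in a small in-memory window (as in n-SGD), which is compact enough to stay in a fast layer of the memory hierarchy. The function name sw_sgd, the window size w, the learning rate, and the use of a logistic-loss gradient are illustrative assumptions.

    # Sketch of sliding-window SGD as described in the abstract (assumptions noted above).
    from collections import deque
    import numpy as np

    def logistic_grad(theta, x, y):
        # Gradient of the logistic loss for a single (x, y) pair, y in {0, 1}.
        p = 1.0 / (1.0 + np.exp(-(x @ theta)))
        return (p - y) * x

    def sw_sgd(X, y, w=32, lr=0.1, epochs=1, seed=0):
        rng = np.random.default_rng(seed)
        theta = np.zeros(X.shape[1])
        window = deque(maxlen=w)  # indices of the w most recently streamed points
        for _ in range(epochs):
            for i in rng.permutation(len(X)):  # stream each point once per epoch
                window.append(i)
                # Average the gradient over the new point plus the cached window:
                # each fresh point is touched once per epoch (1-SGD epoch efficiency),
                # but the update is smoothed over up to w points (n-SGD smoothness).
                g = np.mean([logistic_grad(theta, X[j], y[j]) for j in window], axis=0)
                theta -= lr * g
        return theta

Under this reading, w = 1 degenerates to plain 1-SGD, while larger w trades per-update cost for lower gradient noise; the re-read window points are the source of the temporal locality the paper targets.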
Notes: [Chakroun, Imen; Ashby, Thomas J.] IMEC, Kapeldreef 75, B-3001 Leuven, Belgium. [Haber, Tom] Expertise Ctr Digital Media, Wetenschapspk 2, B-3590 Diepenbeek, Belgium. [Chakroun, Imen; Haber, Tom; Ashby, Thomas J.] ExaSci Life Lab, Kapeldreef 75, B-3001 Leuven, Belgium.
Keywords: SGD; sliding window; machine learning; SVM; logistic regression
Document URI: http://hdl.handle.net/1942/26396
DOI: 10.1016/j.procs.2017.05.082
ISI #: 000404959000243
Rights: © 2017 The Authors. Published by Elsevier B.V.
Category: C1
Type: Proceedings Paper
Validations: ecoom 2018
Appears in Collections:Research publications

Files in This Item:
File: Haber.pdf | Description: Published version | Size: 473.7 kB | Format: Adobe PDF

Scopus citations: 8 (checked on Sep 3, 2020)
Web of Science citations: 12 (checked on Apr 14, 2024)
Page view(s): 132 (checked on Sep 5, 2022)
Download(s): 172 (checked on Sep 5, 2022)

