Please use this identifier to cite or link to this item: http://hdl.handle.net/1942/35455
Title: Task Scheduling in Cloud Using Deep Reinforcement Learning
Authors: Swarup, S.
Shakshuki, E. M.
Yasar, Ansar
Issue Date: 2021
Publisher: ELSEVIER SCIENCE BV
Source: 12TH INTERNATIONAL CONFERENCE ON AMBIENT SYSTEMS, NETWORKS AND TECHNOLOGIES (ANT) / THE 4TH INTERNATIONAL CONFERENCE ON EMERGING DATA AND INDUSTRY 4.0 (EDI40) / AFFILIATED WORKSHOPS, ELSEVIER SCIENCE BV, p. 42-51
Series/Report: Procedia Computer Science
Series/Report no.: 184
Abstract: Cloud computing is an emerging technology used in many applications such as data analysis, storage, and the Internet of Things (IoT). Due to the increasing number of users in the cloud and the IoT devices being integrated with it, the amount of data generated by these users and devices is increasing ceaselessly. Managing this data over the cloud is no longer an easy task. Moving all data to the cloud datacenters is almost impossible and would lead to excessive bandwidth usage, latency, cost, and energy consumption. This makes it evident that allocating resources to users' tasks is an essential quality feature in cloud computing, because it provides customers with high Quality of Service (QoS) and the best response time while respecting the established Service Level Agreement. Efficient utilization of computing resources is therefore of great importance and requires an optimal task-scheduling strategy. This paper focuses on the problem of task scheduling of cloud-based applications and aims to minimize the computational cost under resource and deadline constraints. Towards this end, we propose a clipped double deep Q-learning algorithm that utilizes the target network and experience replay techniques within a reinforcement learning approach. (C) 2021 The Authors. Published by Elsevier B.V.
Keywords: task scheduling; computational cost; energy consumption; deep reinforcement learning; Clipped Double Deep Q-learning (CDDQL)
Document URI: http://hdl.handle.net/1942/35455
ISSN: 1877-0509
DOI: 10.1016/j.procs.2021.03.016
ISI #: 000672800000005
Rights: 2021 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Peer-review under responsibility of the Conference Program Chairs.
Category: C1
Type: Proceedings Paper
Validations: ecoom 2022
Appears in Collections:Research publications

Files in This Item:
File: 1-s2.0-S1877050921006281-main (1).pdf
Description: Published version
Size: 848.86 kB
Format: Adobe PDF

Web of Science™ citations: 18 (checked on Apr 24, 2024)
Page view(s): 158 (checked on Sep 7, 2022)
Download(s): 210 (checked on Sep 7, 2022)

