Please use this identifier to cite or link to this item:
http://hdl.handle.net/1942/35455
Title: Task Scheduling in Cloud Using Deep Reinforcement Learning
Authors: Swarup, S.; Shakshuki, E.M.; YASAR, Ansar
Issue Date: 2021
Publisher: ELSEVIER SCIENCE BV
Source: 12th International Conference on Ambient Systems, Networks and Technologies (ANT) / The 4th International Conference on Emerging Data and Industry 4.0 (EDI40) / Affiliated Workshop, ELSEVIER SCIENCE BV, p. 42-51
Series/Report: Procedia Computer Science
Abstract: Cloud computing is an emerging technology used in many applications such as data analysis, storage, and the Internet of Things (IoT). Due to the increasing number of cloud users and of IoT devices integrated with the cloud, the amount of data generated by these users and devices grows ceaselessly, and managing it over the cloud is no longer an easy task. Moving all data to the cloud datacenters is almost impossible and leads to excessive bandwidth usage, latency, cost, and energy consumption. Allocating resources to users' tasks is therefore an essential quality feature in cloud computing: it provides customers with high Quality of Service (QoS) and the best response time while respecting the established Service Level Agreement. Efficient utilization of computing resources is thus of great importance and requires an optimal task-scheduling strategy. This paper focuses on the problem of task scheduling for cloud-based applications and aims to minimize the computational cost under resource and deadline constraints. Towards this end, we propose a clipped double deep Q-learning algorithm that utilizes the target network and experience replay techniques, as well as the reinforcement learning approach. (C) 2021 The Authors. Published by Elsevier B.V.
Keywords: task scheduling; computational cost; energy consumption; deep reinforcement learning; Clipped Double Deep Q-learning (CDDQL)
Document URI: http://hdl.handle.net/1942/35455
DOI: 10.1016/j.procs.2021.03.016
ISI #: 000672800000005
Rights: (C) 2021 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Peer-review under responsibility of the Conference Program Chairs.
Category: C1
Type: Proceedings Paper
Validations: ecoom 2022
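The abstract names a clipped double deep Q-learning update with a target network. As a minimal sketch of that idea (the function name, array shapes, and toy values below are illustrative assumptions, not the authors' implementation): the action for the next state is chosen by one Q-estimate, and the bootstrapped target takes the minimum of the two estimates for that action, which curbs the overestimation bias of plain Q-learning.

```python
import numpy as np

def clipped_double_q_target(reward, q1_next, q2_next, gamma=0.99, done=False):
    """Bootstrapped target for one transition under clipped double Q-learning.

    q1_next, q2_next: per-action value estimates at the next state,
    e.g. from the online and target networks (illustrative stand-ins here).
    """
    if done:
        return reward
    a_star = int(np.argmax(q1_next))                  # action selected by Q1
    clipped = min(q1_next[a_star], q2_next[a_star])   # pessimistic estimate
    return reward + gamma * clipped

# Toy example: Q1 prefers action 1, but Q2's lower estimate bounds the target.
q1 = np.array([1.0, 3.0, 2.0])
q2 = np.array([1.5, 2.0, 2.5])
target = clipped_double_q_target(reward=1.0, q1_next=q1, q2_next=q2, gamma=0.9)
# a_star = 1; min(3.0, 2.0) = 2.0; target = 1.0 + 0.9 * 2.0 = 2.8
```

In a full scheduler, the state would encode task and VM features, the reward would reflect computational cost, and the two estimates would come from neural networks updated from an experience-replay buffer.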
Appears in Collections: | Research publications |
Files in This Item:
File | Description | Size | Format
---|---|---|---
1-s2.0-S1877050921006281-main (1).pdf | Published version | 848.86 kB | Adobe PDF
Web of Science(TM) citations: 27 (checked on Oct 14, 2024)
Page view(s): 158 (checked on Sep 7, 2022)
Download(s): 210 (checked on Sep 7, 2022)