Literature Database Entry

yan2025reinforcement


Christopher Ankai Yan, "Reinforcement Learning for Task Scheduling in Vehicular Micro Clouds," Bachelor Thesis, School of Electrical Engineering and Computer Science (EECS), TU Berlin (TUB), February 2025. (Advisor: Agon Memedi; Referees: Falko Dressler and Thomas Sikora)


Abstract

As demand for computational power rises with new applications such as blockchain technology and machine learning, innovative solutions are needed to meet growing processing demands. This thesis explores the use of Vehicular Micro Clouds (VMCs), in which vehicles with onboard computers form a cluster to provide task computation. A key challenge of task processing within a VMC is its dynamic nature, as vehicles have only a limited VMC dwell time. This requires efficient task-to-vehicle assignments that ensure completion of scheduled tasks before vehicle departures, while maximizing the total number of completed tasks and adhering to task deadline constraints. Reinforcement Learning has shown significant results in handling complex and dynamic environments. This research applies Deep Q-Learning (DQL), a Reinforcement Learning algorithm, to the VMC task scheduling problem. Two DQL agents were trained in fixed and dynamic VMC environments of varying complexity. The scheduling performance of the two agents was evaluated against three benchmark scheduling policies: Earliest Deadline First (EDF), Lowest Complexity First (LCF), and Random, for two VMC traffic densities (low and high). Although both agents demonstrated limited training progress, the agent trained on the dynamic VMC displayed improved performance over the fixed VMC agent. Additionally, the dynamic VMC agent partly outperformed the Random policy in both low and high VMC densities, achieving fewer mean task interruptions and reducing mean task processing time by approximately 0.4 time units. Future research on tuning DQL and VMC parameters could provide deeper insights and improve scheduling efficiency.

Quick access

BibTeX

Contact

Christopher Ankai Yan

BibTeX reference

@phdthesis{yan2025reinforcement,
    author = {Yan, Christopher Ankai},
    title = {{Reinforcement Learning for Task Scheduling in Vehicular Micro Clouds}},
    advisor = {Memedi, Agon},
    institution = {School of Electrical Engineering and Computer Science (EECS)},
    location = {Berlin, Germany},
    month = {2},
    referee = {Dressler, Falko and Sikora, Thomas},
    school = {TU Berlin (TUB)},
    type = {Bachelor Thesis},
    year = {2025},
}

Copyright notice

Links to final or draft versions of papers are presented here to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted or distributed for commercial purposes without the explicit permission of the copyright holder.

The following applies to all papers listed above that have IEEE copyrights: Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

The following applies to all papers listed above that are in submission to IEEE conference/workshop proceedings or journals: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.

The following applies to all papers listed above that have ACM copyrights: ACM COPYRIGHT NOTICE. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Publications Dept., ACM, Inc., fax +1 (212) 869-0481, or permissions@acm.org.

The following applies to all SpringerLink papers listed above that have Springer Science+Business Media copyrights: The original publication is available at www.springerlink.com.

This page was automatically generated using BibDB and bib2web.

Last modified: 2026-04-30