Please use this identifier to cite or link to this item: http://hdl.handle.net/1942/44971
Title: From Stationary to Nonstationary UAVs: Deep-Learning-Based Method for Vehicle Speed Estimation

Authors: AHMED, Muhammad Waqas; ADNAN, Muhammad; Ahmed, Muhammad; JANSSENS, Davy; WETS, Geert; Ahmed, Afzal; ECTORS, Wim

Issue Date: 2024

Source: Algorithms, 17 (12) (Art N° 558)

Citation: Ahmed, M.W.; Adnan, M.; Ahmed, M.; Janssens, D.; Wets, G.; Ahmed, A.; Ectors, W. From Stationary to Nonstationary UAVs: Deep-Learning-Based Method for Vehicle Speed Estimation. Algorithms 2024, 17, 558. https://doi.org/10.3390/a17120558

Abstract: The development of smart cities relies on the implementation of cutting-edge technologies. Unmanned aerial vehicles (UAVs) and deep learning (DL) models are examples of such disruptive technologies with diverse industrial applications that are gaining traction. When it comes to road traffic monitoring systems (RTMs), the combination of UAVs and vision-based methods has shown great potential. Currently, most solutions focus on analyzing traffic footage captured by hovering UAVs due to the inherent georeferencing challenges in video footage from nonstationary drones. We propose an innovative method capable of estimating traffic speed using footage from both stationary and nonstationary UAVs. The process involves matching each pixel of the input frame with a georeferenced orthomosaic using a feature-matching algorithm. Subsequently, a tracking-enabled YOLOv8 object detection model is applied to the frame to detect vehicles and their trajectories. The geographic positions of these moving vehicles over time are logged in JSON format. The accuracy of this method was validated against reference measurements recorded with a laser speed gun. The results indicate that the proposed method can estimate vehicle speeds with an absolute error as low as 0.53 km/h. The study also discusses the problems and constraints associated with nonstationary drone footage as input and proposes strategies for minimizing noise and inaccuracies. Despite these challenges, the proposed framework demonstrates considerable potential and signifies another step towards automated road traffic monitoring systems. This system enables transportation modelers to realistically capture traffic behavior over a wider area, unlike existing roadside camera systems, which are prone to blind spots and limited spatial coverage.

Keywords: UAV; drone; traffic monitoring; computer vision; YOLO

Document URI: http://hdl.handle.net/1942/44971

e-ISSN: 1999-4893

DOI: 10.3390/a17120558

ISI #: 001384211200001

Rights: © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Category: A1

Type: Journal Contribution
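The abstract describes a three-stage pipeline: georeference each video frame against an orthomosaic via feature matching, track vehicles with a tracking-enabled YOLOv8 model, and derive speeds from the logged geographic positions. The sketch below illustrates that idea under stated assumptions; it is not the authors' released code. The ORB detector, Lowe-ratio matching, a single per-frame homography, the fixed ground sampling distance `GSD_M_PER_PX`, the endpoint-based speed estimate, and all file names are illustrative assumptions.

```python
# Minimal sketch of the pipeline described in the abstract, NOT the paper's
# implementation. Assumptions: ORB features (the paper does not name its
# feature-matching algorithm), one global homography per frame (flat-ground
# assumption), a fixed orthomosaic resolution, and a crude first-to-last-point
# speed estimate without the paper's noise-minimization strategies.

import json
import cv2
import numpy as np
from ultralytics import YOLO

GSD_M_PER_PX = 0.05          # assumed orthomosaic resolution, metres per pixel

orb = cv2.ORB_create(nfeatures=5000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
model = YOLO("yolov8n.pt")   # any YOLOv8 detection weights

def frame_to_ortho_homography(frame_gray, ortho_kp, ortho_des):
    """Estimate the homography mapping frame pixels onto the orthomosaic."""
    kp, des = orb.detectAndCompute(frame_gray, None)
    matches = matcher.knnMatch(des, ortho_des, k=2)
    # Lowe's ratio test to keep only distinctive matches.
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
    if len(good) < 4:        # findHomography needs at least 4 correspondences
        return None
    src = np.float32([kp[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([ortho_kp[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

def run(video_path, ortho_path, out_path="tracks.json"):
    ortho_gray = cv2.imread(ortho_path, cv2.IMREAD_GRAYSCALE)
    ortho_kp, ortho_des = orb.detectAndCompute(ortho_gray, None)

    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    tracks = {}              # track id -> list of [frame_idx, x_px, y_px]
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        H = frame_to_ortho_homography(gray, ortho_kp, ortho_des)

        # Detect and track vehicles; persist=True keeps IDs across frames.
        result = model.track(frame, persist=True, verbose=False)[0]
        if H is not None and result.boxes.id is not None:
            # Map box centres from frame pixels to orthomosaic pixels.
            centres = result.boxes.xywh[:, :2].cpu().numpy().reshape(-1, 1, 2)
            mapped = cv2.perspectiveTransform(centres.astype(np.float32), H)
            for tid, (x, y) in zip(result.boxes.id.int().tolist(),
                                   mapped.reshape(-1, 2)):
                tracks.setdefault(tid, []).append([frame_idx, float(x), float(y)])
        frame_idx += 1
    cap.release()

    # Speed per track: straight-line displacement on the orthomosaic,
    # converted to metres via the assumed ground sampling distance.
    speeds = {}
    for tid, pts in tracks.items():
        if len(pts) < 2:
            continue
        (f0, x0, y0), (f1, x1, y1) = pts[0], pts[-1]
        dist_m = np.hypot(x1 - x0, y1 - y0) * GSD_M_PER_PX
        dt_s = (f1 - f0) / fps
        speeds[tid] = 3.6 * dist_m / dt_s  # m/s -> km/h
    with open(out_path, "w") as f:
        json.dump({"tracks": tracks, "speed_kmh": speeds}, f, indent=2)
```

Pushing the box centres through a single homography implicitly assumes a flat ground plane, the same assumption an orthophoto makes; for nonstationary footage the homography must be re-estimated every frame, which is exactly where the noise and inaccuracies discussed in the paper enter.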
Appears in Collections: | Research publications |
Files in This Item:
File | Description | Size | Format
---|---|---|---
algorithms-17-00558-with-cover.pdf | Published version | 2.1 MB | Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.