REINFORCEMENT LEARNING METHOD FOR AUTONOMOUS FLIGHT PATH PLANNING OF MULTIPLE UAVS
DOI: https://doi.org/10.31891/csit-2025-2-20

Keywords: multiple UAVs, path planning, reinforcement learning, centralized training, decentralized execution, PPO algorithm, RNN, CTDE architecture

Abstract
This study aims to develop a reinforcement learning method for autonomous flight path planning of multiple UAVs under real-world conditions with limited observations and multiple conflicting optimization objectives. The research proposes a multi-agent reinforcement learning approach based on Proximal Policy Optimization (PPO) combined with centralized training and decentralized execution (CTDE). Additionally, a recurrent neural network (RNN) layer is integrated into the critic and actor networks to address partial observability. The reward function is designed to balance time efficiency, safety, and area coverage. Experimental results demonstrate that the proposed method significantly outperforms independent learning approaches in terms of reward accumulation, convergence speed, and decision stability. The CTDE architecture with RNN-enhanced critics proved effective in handling the challenges of multi-agent coordination and partial observability. The trained model enables real-time trajectory planning in three-dimensional environments, surpassing traditional optimization methods. The novelty lies in the application of a multi-agent PPO architecture enhanced by RNNs under CTDE for solving real-time multi-objective optimization problems in UAV path planning. A customized reward structure was developed to simultaneously optimize safety, time, and coverage objectives without retraining. The developed method enables efficient and reliable online trajectory planning for UAV groups, making it applicable in surveillance, search and rescue, and exploration missions where rapid and adaptive decision-making is essential.
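The abstract describes a reward function that balances time efficiency, safety, and area coverage. As a minimal illustrative sketch of how such a multi-objective per-step reward could be composed, the function below combines the three signals with fixed weights; the weights, the safe-separation radius, and the input signals are assumptions for illustration, not the authors' actual formulation.

```python
# Hypothetical per-step reward for one UAV agent, sketching the balance of
# time efficiency, safety, and coverage described in the abstract.
# All weights and thresholds are illustrative assumptions.

def uav_reward(dt, min_separation, newly_covered_cells,
               safe_distance=5.0, w_time=0.1, w_safety=1.0, w_coverage=0.5):
    """Return the combined reward for a single time step.

    dt                  -- elapsed time in the step (penalized)
    min_separation      -- distance to the closest other UAV/obstacle
    newly_covered_cells -- number of previously unvisited grid cells covered
    """
    r_time = -w_time * dt  # penalize elapsed time to encourage fast missions
    # Safety penalty grows linearly once the closest neighbour enters
    # the safe-separation radius; zero while separation is maintained.
    violation = max(0.0, safe_distance - min_separation)
    r_safety = -w_safety * violation
    r_coverage = w_coverage * newly_covered_cells  # reward new area covered
    return r_time + r_safety + r_coverage
```

Because the objectives enter as weighted additive terms, their relative priority can be tuned through the weights alone, which is one plausible way to obtain the "optimization without retraining" behaviour the abstract mentions.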
Copyright (c) 2025 Maksym VELYCHKO, Tetiana KYSIL

This work is licensed under a Creative Commons Attribution 4.0 International License.