Distributed Optimization Algorithms for Multi-Agent Mission Planning: Convergence Analysis and Performance Benchmarking

S. Ilavarasi, Dharani Jaganathan

Abstract


Multi-agent mission planning requires coordinated optimization of trajectories, task sequences, and resource allocation across distributed agents without centralized computation, demanding algorithms that converge to high-quality solutions while respecting communication and computational constraints. This paper presents comprehensive convergence analysis and performance benchmarking of distributed optimization algorithms for multi-agent planning problems. We systematically examine algorithmic approaches including distributed gradient descent and its accelerated variants, alternating direction method of multipliers (ADMM) decomposing coupled constraints, consensus-based optimization where agents iteratively refine solutions through local averaging, and dual decomposition methods exploiting problem structure. The study develops rigorous convergence analysis for each algorithm class, establishing convergence rates, optimality guarantees, and conditions under which global optima are achieved. Particular attention is devoted to non-convex mission planning problems where trajectory dynamics and collision avoidance constraints violate convexity, examining local convergence properties and escape mechanisms from suboptimal solutions. We analyze the impact of network topology on convergence, demonstrating how graph connectivity, diameter, and spectral properties affect algorithm performance. The paper presents extensive performance benchmarking across representative mission scenarios: coordinated area coverage optimizing sensor placement, multi-target tracking with dynamic task assignment, and collaborative payload delivery with timing constraints. Benchmark environments systematically vary problem scale (5–50 agents), coupling strength between agent decisions, communication graph topology, and presence of dynamic obstacles.
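The consensus-based scheme described above, in which agents interleave local averaging with gradient steps on private costs, can be illustrated with a minimal sketch. The setup here is hypothetical (quadratic agent costs f_i(x) = ½(x − b_i)² on a 5-agent ring with Metropolis mixing weights) and is not the paper's benchmark implementation:

```python
# Decentralized gradient descent (DGD) sketch: each agent averages its
# estimate with its ring neighbors, then takes a gradient step on its own
# private quadratic cost. Hypothetical setup for illustration only.

def dgd_ring(b, alpha=0.01, iters=1500):
    """Run DGD for f_i(x) = 0.5*(x - b[i])**2 on a ring of len(b) agents."""
    n = len(b)
    x = [0.0] * n  # each agent's local estimate of the shared decision variable
    for _ in range(iters):
        # Metropolis weights on a ring (degree 2): 1/3 for self and each neighbor.
        avg = [(x[(i - 1) % n] + x[i] + x[(i + 1) % n]) / 3.0 for i in range(n)]
        # Local gradient of f_i at the current estimate x_i is (x_i - b_i).
        x = [avg[i] - alpha * (x[i] - b[i]) for i in range(n)]
    return x

estimates = dgd_ring([1.0, 2.0, 3.0, 4.0, 5.0])
# Every agent approaches the centralized minimizer mean(b) = 3, up to the
# O(alpha) steady-state bias typical of constant-step DGD.
```

The ring's second-largest mixing eigenvalue governs how fast the averaging step dissipates disagreement, which is the spectral dependence the abstract refers to: denser graphs have a larger spectral gap and mix faster.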
Performance metrics encompass solution quality relative to centralized optimization, convergence time measured in iterations and wall-clock time, communication overhead quantified by message count and bandwidth, and scalability characteristics. Comparative results reveal that ADMM variants achieve superior solution quality with 5–15% optimality gaps but require more communication rounds, while consensus-based methods offer faster convergence with moderate optimality loss. We investigate asynchronous algorithm variants accommodating heterogeneous agent computation speeds and unreliable communication. The paper examines warm-starting strategies and adaptive parameter tuning for improved convergence.
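The per-round structure behind the ADMM comparison above can be sketched for the same kind of separable objective. This is a hypothetical global-consensus ADMM example (costs f_i(x) = ½(x − b_i)², for which the local x-update has a closed form), not the benchmarked implementation:

```python
# Global-consensus ADMM sketch: minimize sum_i 0.5*(x - b[i])**2 subject to
# all agents agreeing on a common x. Hypothetical illustration only.

def consensus_admm(b, rho=1.0, iters=60):
    n = len(b)
    z = 0.0                 # shared consensus variable
    u = [0.0] * n           # scaled dual variables, one per agent
    for _ in range(iters):
        # Local solve: argmin_x 0.5*(x - b_i)^2 + (rho/2)*(x - z + u_i)^2,
        # which has the closed form below for these quadratic costs.
        x = [(b[i] + rho * (z - u[i])) / (1.0 + rho) for i in range(n)]
        # Consensus step: z averages x_i + u_i (one communication round).
        z = sum(x[i] + u[i] for i in range(n)) / n
        # Dual ascent on the agreement constraint x_i = z.
        u = [u[i] + x[i] - z for i in range(n)]
    return z

z_star = consensus_admm([1.0, 2.0, 3.0, 4.0, 5.0])
# z converges to the centralized optimum mean(b) = 3.
```

Each iteration costs exactly one gather/broadcast round, which is the communication-count trade-off the benchmark metrics above quantify against the cheaper but less accurate consensus averaging step.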

Keywords


distributed optimization, mission planning, multi-agent systems, convergence analysis.

References


Chang, Tsung-Hui, Angelia Nedić, and Anna Scaglione. “Distributed Constrained Optimization by Consensus-Based Primal-Dual Perturbation Method”. IEEE Transactions on Automatic Control 59, no. 6 (2014): 1524–38. https://doi.org/10.1109/TAC.2014.2308612.

Chang, Tsung-Hui, Mingyi Hong, and Xiangfeng Wang. “Multi-Agent Distributed Optimization via Inexact Consensus ADMM”. IEEE Transactions on Signal Processing 63, no. 2 (2015): 482–97. https://doi.org/10.1109/TSP.2014.2367458.

Chang, Tsung-Hui. “A Proximal Dual Consensus ADMM Method for Multi-Agent Constrained Optimization”. IEEE Transactions on Signal Processing 64, no. 14 (2016): 3719–34. https://doi.org/10.1109/TSP.2016.2544743.

Li, Chaoyong, Sai Chen, Jianqing Li, and Feng Wang. “Distributed Multi-Step Subgradient Optimization for Multi-Agent System”. Systems & Control Letters 128 (2019): 26–33. https://doi.org/10.1016/j.sysconle.2019.04.008.

Li, Huaqing, Hao Zhang, Zheng Wang, Yifan Zhu, and Qi Han. “Distributed Consensus-Based Multi-Agent Convex Optimization via Gradient Tracking Technique”. Journal of the Franklin Institute 356, no. 6 (2019): 3733–61. https://doi.org/10.1016/j.jfranklin.2019.01.050.

Nedić, Angelia, and Asuman Ozdaglar. “Distributed Subgradient Methods for Multi-Agent Optimization”. IEEE Transactions on Automatic Control 54, no. 1 (2009): 48–61. https://doi.org/10.1109/TAC.2008.2009515.

Wang, Aijuan, Tao Dong, and Xiaofeng Liao. “Distributed Optimal Consensus Algorithms in Multi-Agent Systems”. Neurocomputing 339 (2019): 26–35. https://doi.org/10.1016/j.neucom.2019.01.044.

Wang, Dong, Hualing Ren, and Fubo Shao. “Distributed Newton Methods for Strictly Convex Consensus Optimization Problems in Multi-Agent Networks”. Symmetry 9, no. 8 (2017). https://doi.org/10.3390/sym9080163.

Yang, Tao, Xinlei Yi, Junfeng Wu, Ye Yuan, Di Wu, Ziyang Meng, Yiguang Hong, Hong Wang, Zongli Lin, and Karl H. Johansson. “A Survey of Distributed Optimization”. Annual Reviews in Control 47 (2019): 278–305. https://doi.org/10.1016/j.arcontrol.2019.05.006.

Yuan, Deming, Shengyuan Xu, and Huanyu Zhao. “Distributed Primal–Dual Subgradient Method for Multiagent Optimization via Consensus Algorithms”. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 41, no. 6 (2011): 1715–24. https://doi.org/10.1109/TSMCB.2011.2160394.

Zhang, Ruiliang, and James Kwok. “Asynchronous Distributed ADMM for Consensus Optimization”. In Proceedings of the 31st International Conference on Machine Learning, edited by Eric P. Xing and Tony Jebara, 32:1701–9. Proceedings of Machine Learning Research. Beijing, China: PMLR, 2014. https://proceedings.mlr.press/v32/zhange14.html.


This work is licensed under a Creative Commons Attribution 3.0 License.