Abstract
Deep reinforcement learning (DRL) has recently transformed how decision-making and automated control problems are solved. In the context of networking, there is a growing trend in the research community to apply DRL algorithms to optimization problems such as routing. However, existing proposals fail to achieve good results, often under-performing traditional routing techniques. We argue that the reason behind this poor performance is that they use straightforward representations of networks. In this paper, we propose a DRL-based solution for routing in optical transport networks (OTNs). In contrast to previous works, we propose a more elaborate representation of the network state that reduces the level of knowledge abstraction required of DRL agents and readily captures the singularities of network topologies. Our evaluation results show that, using our novel representation, DRL agents achieve better performance and learn how to route traffic in OTNs significantly faster than with state-of-the-art representations. Additionally, we reverse-engineered the routing strategy learned by our DRL agent and, as a result, found a routing algorithm that outperforms well-known traditional routing heuristics.
© 2019 Optical Society of America