Decentralizing machine learning based QoT estimation for sliceable optical networks

Abstract:

Dynamic network slicing has emerged as a promising and fundamental framework for meeting 5G's diverse use cases. As machine learning (ML) is expected to play a pivotal role in the efficient control and management of these networks, in this work we examine the ML-based quality-of-transmission (QoT) estimation problem in the dynamic network slicing context, where each slice has to meet a different QoT requirement. Specifically, we examine ML-based QoT frameworks with the aim of finding QoT models that are fine-tuned to the diverse QoT requirements. Centralized and distributed frameworks are examined and compared according to their model accuracy, routing and spectrum allocation (RSA) accuracy, and CPU (training time) and RAM (memory) requirements. We show that the distributed QoT models outperform the centralized QoT model in both accuracy and CPU usage. The RSA accuracy, i.e., the accuracy of the models with respect to the QoT-aware RSA decisions, is sufficiently high for both frameworks. Regarding RAM usage, as the distributed framework must train several QoT models in parallel, it may require more memory, especially as the number of diverse QoT requirements increases; this memory, however, tends to be reserved for a shorter period of time. Moreover, this work develops a dynamic multi-slice QoT-aware RSA framework that integrates the ML-based QoT models. The aim is to examine network performance when the diverse QoT models are considered, as opposed to the state-of-the-art single-slice QoT-aware RSA approach, where all connections/slices are provisioned according to a single QoT requirement. We show that the multi-slice QoT-aware RSA approach significantly improves network performance, a clear indication that the commonly considered single-slice QoT-aware RSA approach may lead to connection overprovisioning.
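As a rough illustration of the two frameworks compared above (not the paper's implementation), the sketch below contrasts a single centralized QoT classifier, which takes the per-slice QoT threshold as an extra input feature, with per-slice distributed classifiers, each trained against its own QoT requirement. The lightpath features, Q-factor thresholds, synthetic data, and the choice of scikit-learn's RandomForestClassifier are all illustrative assumptions, since the abstract does not specify the model type or feature set.

```python
# Illustrative sketch only: centralized vs. per-slice (distributed) QoT estimation.
# All features, thresholds, and data below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 5000

# Hypothetical lightpath features: total length (km), hop count, launch power (dBm).
X = np.column_stack([
    rng.uniform(50, 3000, n),   # path length (km)
    rng.integers(1, 15, n),     # number of hops
    rng.uniform(-2, 3, n),      # launch power (dBm)
])

# Synthetic Q-factor proxy that degrades with length and hops (illustrative only).
q_factor = 12.0 - 0.002 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(0, 0.5, n)

# Each slice imposes a different QoT (Q-factor) requirement.
slice_thresholds = {"slice_a": 6.0, "slice_b": 7.0, "slice_c": 8.5}

# Centralized framework: one model for all slices, with the slice's
# QoT threshold appended as an additional input feature.
thr = rng.choice(list(slice_thresholds.values()), size=n)
X_central = np.column_stack([X, thr])
y_central = (q_factor >= thr).astype(int)
central_model = RandomForestClassifier(n_estimators=100).fit(X_central, y_central)

# Distributed framework: one model per slice, trainable in parallel,
# each fine-tuned to a single QoT requirement. Training several models
# concurrently is what drives the higher (but shorter-lived) RAM usage.
distributed_models = {
    name: RandomForestClassifier(n_estimators=100).fit(X, (q_factor >= t).astype(int))
    for name, t in slice_thresholds.items()
}

# A candidate lightpath is checked against the slice it is meant to serve,
# e.g., inside a QoT-aware RSA loop.
candidate = np.array([[1200.0, 6, 0.5]])
print("slice_b feasible:", bool(distributed_models["slice_b"].predict(candidate)[0]))
```

Under this reading, a multi-slice QoT-aware RSA procedure would query the model matching each connection's slice, whereas a single-slice approach would apply the strictest threshold (here, 8.5) to every connection, which is the overprovisioning effect the abstract points to.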