Open Radio Access Network (O-RAN) is transforming the telecommunications landscape by enabling flexible, intelligent, and multi-vendor networks. Central to its architecture are xApps hosted on the Near-Real-Time RAN Intelligent Controller (Near-RT RIC), which optimize network functions in real time. However, the concurrent operation of multiple xApps with conflicting objectives can lead to suboptimal performance. This paper introduces a generalized Conflict Management scheme for Multi-Channel Power Control in O-RAN xApps (COMIX), designed to detect and resolve conflicts between xApps. To demonstrate COMIX, we focus on two Deep Reinforcement Learning (DRL)-based xApps for power control: one maximizes the data rate across UEs, and the other optimizes system-level energy efficiency. COMIX employs a standardized Conflict Mitigation Framework (CMF) for conflict detection and resolution and leverages the Network Digital Twin (NDT) to evaluate the impact of conflicting actions before applying them to the live network. We validate the framework using a realistic multi-channel power control scenario under various conflict resolution policies, demonstrating its effectiveness in balancing antagonistic objectives. Evaluation results show that COMIX achieves up to 60% energy savings across different Service-Level Agreement (SLA) policies compared to a baseline conflict-unaware system, with negligible impact (around 3%) on system throughput. While this study considers power control xApps, the COMIX framework is generalizable and can be applied to any xApp conflict scenario involving resource contention or KPI interdependence.
This work was supported in part by UNITY-6G Project, funded by EU HORIZON-JU-SNS-2024 Program, under Grant 101192650; and in part by ‘‘REACT-6G’’ Project, funded by HORIZON-JU-SNS-2022, 2nd 6G-SANDBOX Open Call, under Grant GA101096328.
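To make the conflict-mitigation workflow described above more concrete, the following Python sketch shows one way a CMF-style resolver could flag per-channel conflicts between a throughput-oriented and an energy-oriented xApp and keep the candidate action that scores best under an SLA-weighted objective evaluated on a digital twin. The function names, the reward proxies, and the NDT evaluation stub are hypothetical illustrations of the idea, not the COMIX implementation.

```python
"""Minimal sketch of CMF-style conflict detection/resolution for two
power-control xApps. The NDT evaluation is a stand-in stub; the real
framework would query a network digital twin."""

def detect_conflicts(actions_a, actions_b):
    """Channels on which the two xApps request different power levels."""
    return [ch for ch in actions_a if ch in actions_b and actions_a[ch] != actions_b[ch]]

def ndt_evaluate(power_dbm, sla_weight):
    """Hypothetical NDT stub: trade a throughput proxy against an energy proxy."""
    throughput_proxy = power_dbm            # more power -> more rate (simplified)
    energy_proxy = power_dbm ** 2 / 100.0   # quadratic energy penalty (assumed)
    return sla_weight * throughput_proxy - (1 - sla_weight) * energy_proxy

def resolve(actions_a, actions_b, sla_weight=0.5):
    """Per conflicting channel, keep the candidate scoring best on the NDT."""
    resolved = {**actions_a, **actions_b}   # non-conflicting actions pass through
    for ch in detect_conflicts(actions_a, actions_b):
        candidates = (actions_a[ch], actions_b[ch])
        resolved[ch] = max(candidates, key=lambda p: ndt_evaluate(p, sla_weight))
    return resolved

if __name__ == "__main__":
    rate_xapp = {0: 23.0, 1: 20.0, 2: 17.0}   # dBm per channel (throughput-oriented)
    ee_xapp = {0: 14.0, 1: 20.0, 2: 10.0}     # dBm per channel (energy-oriented)
    print(resolve(rate_xapp, ee_xapp, sla_weight=0.7))
```

Varying `sla_weight` mimics the different SLA policies mentioned in the abstract: values closer to 1 favor the rate-maximizing xApp, values closer to 0 favor the energy-efficiency xApp.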
This paper investigates the impact of randomly deployed non-orthogonal co-channel interference (CCI), originating from the information exchange process among non-orthogonal multiple access (NOMA) users, in an active, passive, and absorptive reconfigurable intelligent surface (RIS)-assisted dual-hop network. More specifically, the study considers that the information exchange process involves the source utilizing an active, passive, or absorptive RIS architecture, along with a line-of-sight (LOS)/non-line-of-sight (NLOS) link between the source and destination terminals. Additionally, this study considers a limited number of non-orthogonal CCI sources affecting the destination terminal under an independent and identically distributed (i.i.d.) CCI scenario. Theoretical insights and Monte Carlo-based simulations collectively demonstrate that non-orthogonal CCI severely degrades system performance, particularly in high signal-to-noise ratio conditions, leading to notable losses in system coding gain. Meanwhile, results also reveal that increasing the number of RIS elements stabilizes the system and mitigates the impact of CCI on its performance.
This work was supported by the 6G-LEADER Project, funded by Smart Networks and Services Joint Undertaking, through the European Union’s Horizon Europe Research and Innovation Programme (6g-leader.eu) under Grant 101192080.
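As a rough companion to the analysis summarized above, the sketch below runs a Monte Carlo estimate of outage probability for a heavily simplified passive-RIS link with ideal phase alignment, i.i.d. Rayleigh fading, and a fixed number of co-channel interferers at the destination. It is only meant to illustrate the simulation methodology and the trend that more RIS elements mitigate CCI; the channel model and all parameters are assumptions, not the paper's exact system model.

```python
"""Monte Carlo sketch (not the paper's exact model): outage probability of a
passive-RIS-assisted link with ideal phase alignment under i.i.d. Rayleigh
fading and L co-channel interferers at the destination."""
import numpy as np

def outage_probability(n_elements, n_interferers, snr_db, inr_db,
                       rate_threshold=2.0, trials=200_000, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    snr = 10 ** (snr_db / 10)
    inr = 10 ** (inr_db / 10)
    # Per-element cascaded channel: |h_i| * |g_i| with Rayleigh-distributed magnitudes.
    h = rng.rayleigh(scale=1 / np.sqrt(2), size=(trials, n_elements))
    g = rng.rayleigh(scale=1 / np.sqrt(2), size=(trials, n_elements))
    gain = (h * g).sum(axis=1) ** 2          # coherent combining across elements
    # Aggregate interference power from i.i.d. Rayleigh-faded CCI links.
    i_gain = rng.rayleigh(scale=1 / np.sqrt(2), size=(trials, n_interferers)) ** 2
    sinr = snr * gain / (inr * i_gain.sum(axis=1) + 1.0)
    return np.mean(np.log2(1 + sinr) < rate_threshold)

if __name__ == "__main__":
    for n in (8, 16, 32, 64):
        p = outage_probability(n_elements=n, n_interferers=3, snr_db=-10, inr_db=0)
        print(f"N={n:3d} elements -> outage ~ {p:.4f}")
```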
The increasing complexity of cloud-native 6G networks necessitates intelligent resource management to optimize scalability, energy efficiency, and service reliability. This paper presents an AI-driven self-healing mechanism for dynamic server activation within a cloud-native system. The proposed framework integrates three key components: the Management and Orchestration Framework (MOF) for policy-based network service orchestration, the Cloud Continuum Framework (CCF) for dynamic resource scaling, and the Artificial Intelligence and Machine Learning Framework (AIMLF) for predictive analytics and anomaly detection. By leveraging AI models, the system continuously monitors workload variations, forecasts resource demand, and dynamically scales computing resources, ensuring optimal energy efficiency and SLA compliance. The proposed self-healing workflow enables proactive server activation and deactivation, addressing load bursts and underutilization scenarios. Numerical evaluations, including real-world traffic data analysis, demonstrate that our approach significantly reduces power consumption and improves load balancing and resource utilization compared to traditional static resource allocation methods.
This work was supported in part by the 6G-Cloud Project, funded from the European Union’s HORIZON-JU-SNS-2023 programme, under Grant Agreement No 101139073 (www.6g-cloud.eu).
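The following toy sketch illustrates the forecast-then-scale loop in the spirit of the self-healing workflow above: a simple exponential-smoothing forecast stands in for the AIMLF's predictive models, and a hysteresis rule decides how many servers to keep active. Thresholds, capacities, and the load data are assumptions for illustration only, not the project's MOF/CCF/AIMLF implementation.

```python
"""Toy sketch of a forecast-then-scale loop: forecast the next-window load,
then activate/deactivate servers with a hysteresis margin."""
import math

def ewma_forecast(history, alpha=0.4):
    """One-step-ahead load forecast via exponential smoothing (stand-in model)."""
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def target_servers(load, per_server_capacity, active, margin=0.15):
    """Servers needed for the forecast load, with hysteresis to avoid flapping."""
    needed = math.ceil(load * (1 + margin) / per_server_capacity)
    if needed > active:                       # load burst -> proactive activation
        return needed
    if needed < active and load < 0.6 * active * per_server_capacity:
        return max(needed, 1)                 # sustained underutilization -> deactivate
    return active

if __name__ == "__main__":
    load_history = [120, 135, 150, 180, 220, 260]   # requests/s (synthetic)
    forecast = ewma_forecast(load_history)
    print("forecast:", round(forecast, 1),
          "target servers:", target_servers(forecast, per_server_capacity=100, active=2))
```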
The evolution of cloud computing towards a cloud continuum, including cloud, edge, and far-edge resources, is revolutionizing the deployment, management, and orchestration of Network Services (NSs) and applications. Traditional, centralized orchestration approaches are increasingly inadequate for handling the complexity, scale, and dynamic nature of this continuum. In this paper, we present a data-driven approach for AI-powered service orchestration based on the European 6G-CLOUD project. Specifically, we introduce the Decentralized Service Orchestrator (DSO) framework, an AI-powered, decentralized orchestration model that leverages the capabilities of the Artificial Intelligence and Machine Learning Framework (AI/MLF) to enable intelligent, autonomous, and scalable service lifecycle management across heterogeneous environments. Key contributions include the detailed architecture of the DSO, its workflows, and its integration with the Cloud Continuum and with an AI/MLF that manages the AI lifecycle, enabling model provisioning to the different components. By enabling decentralized AI-driven decision-making, this framework enhances service reliability, scalability, operational efficiency, and innovation acceleration, paving the way for next-generation cloud continuum orchestration.
This work has been partly funded by the European Commission through the SNS JU project 6G-CLOUD (Grant Agreement no. 101139073).
The rapid evolution of wireless communications has introduced new possibilities for the digital transformation of maritime operations. As 5G begins to take shape in selected nearshore and port environments, the forthcoming 6G promises to unlock transformative capabilities across the entire maritime domain, integrating Terrestrial/Non-Terrestrial Networks (TN/NTN) to form a space-air-ground-sea-underwater system. This paper presents a comprehensive review of how 6G-enabling technologies can be adapted to address the unique challenges of Maritime Communication Networks (MCNs). We begin by outlining a reference architecture for heterogeneous MCNs and reviewing the limitations of existing 5G deployments at sea. We then explore the key technical advancements introduced by 6G and map them to maritime use cases such as fleet coordination, just-in-time port logistics, and low-latency emergency response. Furthermore, the critical Artificial Intelligence/Machine Learning (AI/ML) concepts and algorithms are described to highlight their potential in optimizing maritime functionalities. Finally, we propose a set of resource optimization scenarios, including dynamic spectrum allocation, energy-efficient communications and edge offloading in MCNs, and discuss how AI/ML and learning-based methods can offer scalable, adaptive solutions. By bridging the gap between emerging 6G capabilities and practical maritime requirements, this paper highlights the role of intelligent, resilient, and globally connected networks in shaping the future of maritime communications.
This paper has been supported by the UNITY-6G project, funded by HORIZON-JU-SNS-2024, under Grant no. 101192650.
This paper presents a novel resource allocation and Multi-access Edge Computing (MEC) server deactivation strategy for MEC-enabled O-RAN networks, with the objective of optimizing energy consumption while satisfying strict delay requirements. Our approach leverages an intelligent orchestration application, SLEEPY-rApp, deployed in the Service Management and Orchestration (SMO) layer to dynamically control MEC server activation and request routing. We formulate a joint optimization problem that simultaneously considers computing capacity, end-to-end delay, and energy consumption. To address the NP-hard nature of this problem in real time, we propose a low-complexity heuristic that adapts to varying network conditions and workload patterns. Simulation results indicate that our method significantly reduces energy usage—particularly during peak operating periods—while maintaining the required quality of service. These findings underscore the potential of intelligent orchestration in enhancing the energy efficiency of future MEC-enabled O-RAN systems.
This work has been supported by the UNITY-6G project, under Grant no. 101192650.
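The sketch below gives a flavor of the kind of low-complexity heuristic mentioned above: a greedy routine that powers down the least-loaded MEC servers as long as the remaining ones can absorb the demand within a delay budget. The feasibility check, the queueing-delay proxy, and all capacities are assumptions used for illustration; this is not the authors' algorithm.

```python
"""Greedy sketch of a delay-aware MEC server deactivation heuristic, in the
spirit of SLEEPY-rApp but not the paper's algorithm."""

def can_serve(servers, demand, delay_budget_ms):
    """Feasibility check: enough capacity plus a crude M/M/1-style delay proxy."""
    cap = sum(s["capacity"] for s in servers)
    if cap <= demand:
        return False
    delay_ms = 1000.0 / (cap - demand)      # simplified queueing delay proxy
    return delay_ms <= delay_budget_ms

def deactivate_greedy(servers, demand, delay_budget_ms):
    """Iteratively switch off the lowest-capacity active server while feasible."""
    active = sorted(servers, key=lambda s: s["capacity"])
    kept = list(active)
    for s in active:
        trial = [t for t in kept if t is not s]
        if trial and can_serve(trial, demand, delay_budget_ms):
            kept = trial                    # energy saved: one fewer active server
    return kept

if __name__ == "__main__":
    servers = [{"name": f"mec-{i}", "capacity": c} for i, c in enumerate((50, 80, 120, 200))]
    kept = deactivate_greedy(servers, demand=180, delay_budget_ms=20)
    print("keep active:", [s["name"] for s in kept])
```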
The Cloud-Edge Computing Continuum (CEC), where edge and cloud nodes are seamlessly connected, is dedicated to handling substantial computational loads offloaded by end-users. These tasks can suffer from delays or be dropped entirely when deadlines are missed, particularly under fluctuating network conditions and resource limitations. The CEC is coupled with the need for hybrid task offloading, where task placement decisions determine whether tasks are processed locally, offloaded vertically to the cloud, or offloaded horizontally to interconnected edge servers. In this paper, we present a distributed hybrid task offloading scheme (HOODIE) designed to jointly optimize task latency and drop rate under dynamic CEC traffic. HOODIE employs a model-free deep reinforcement learning (DRL) framework, where distributed DRL agents at each edge server autonomously determine offloading decisions without global task distribution awareness. To further enhance the system proactivity and learning stability, we incorporate techniques such as Long Short-Term Memory (LSTM), dueling deep Q-networks (DQN), and double DQN. Extensive simulation results demonstrate that HOODIE effectively reduces task drop rates and average task processing delays, outperforming several baseline methods under changing CEC settings and dynamic conditions.
This work was supported by the EU HORIZON Research and Innovation Programme ENACT Project, under Grant 101135423.
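To illustrate the LSTM, dueling-DQN, and double-DQN ingredients named above, here is a minimal PyTorch sketch of the kind of agent network an edge server could use: an LSTM over a window of recent observations feeding a dueling Q-head, plus the standard double-DQN target computation. Dimensions, the state layout, and the action encoding are illustrative assumptions, not taken from the paper.

```python
"""Minimal PyTorch sketch of an LSTM-backed dueling Q-network with a
double-DQN target, illustrating the techniques HOODIE combines."""
import torch
import torch.nn as nn

class DuelingLSTMQNet(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.value = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.advantage = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, n_actions))

    def forward(self, obs_seq):
        # obs_seq: (batch, time, obs_dim); use the last hidden state of the LSTM.
        _, (h, _) = self.lstm(obs_seq)
        h = h[-1]
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=1, keepdim=True)   # dueling aggregation

def double_dqn_target(online, target, next_obs, reward, done, gamma=0.99):
    """Double DQN: the online net picks the action, the target net evaluates it."""
    with torch.no_grad():
        best = online(next_obs).argmax(dim=1, keepdim=True)
        q_next = target(next_obs).gather(1, best).squeeze(1)
        return reward + gamma * (1 - done) * q_next

if __name__ == "__main__":
    # Assumed action layout: 0 = local, 1..K = offload to peer k, K+1 = offload to cloud.
    net, tgt = DuelingLSTMQNet(obs_dim=6, n_actions=5), DuelingLSTMQNet(obs_dim=6, n_actions=5)
    obs = torch.randn(4, 10, 6)            # batch of observation windows
    print(net(obs).shape)                  # torch.Size([4, 5])
    y = double_dqn_target(net, tgt, obs, torch.zeros(4), torch.zeros(4))
    print(y.shape)                         # torch.Size([4])
```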
Current and upcoming data-intensive Mission Critical (MC) applications rely on high Quality of Service (QoS) requirements related to connectivity, latency, and network reliability. Beyond-5G networks shall accommodate MC services that enable voice, data, and video transfer in extreme circumstances, for instance in the event of network overloads or infrastructure failures. In this work, we describe the specifications of the architectural framework that enables the roll-out of MC services over 5G networks and beyond, considering recent technological advancements in cloud-native functionalities, network slicing, and edge deployments. The network architecture and the deployment process are described in three practical scenarios, including a capacity increase in the service load that necessitates the scaling of the computational resources, the deployment of a dedicated network slice for accommodating the stringent requirements of an MC application, and a service migration scenario at the edge to cope with critical failures and QoS degradation. Furthermore, we illustrate the implementation of a Machine Learning (ML) algorithm that is used for overload prediction, validating its ability to predict the capacity increase and notify the components responsible for triggering the appropriate actions, based on a real dataset. To this end, we mathematically define the overload detection problem, as well as generalized prediction tasks in emergency situations, and examine the key parameters (proactiveness ability, lookback window, etc.) of the ML model, also comparing its prediction ability (~93% accuracy in overload detection) against multiple baseline classifiers. Finally, we demonstrate the flexibility of the ML model to achieve reliable predictions in scenarios with diverse requirements.
This work was partially supported by the “Service-oriented 6G network architecture for distributed, intelligent, and sustainable cloud-native communication systems (6G-Cloud)” project, funded by the EU HORIZON-JU-SNS-2023 program, under grant agreement No 101139073.
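The lookback-window formulation mentioned above can be sketched as follows: features are the last W load samples, and the label is whether the load exceeds an overload threshold H steps ahead (the proactiveness horizon). The sketch uses synthetic load data and a logistic-regression baseline; the threshold, window sizes, and data are illustrative assumptions, not the paper's dataset or model.

```python
"""Sketch of a lookback-window overload-detection formulation on synthetic
load data, with a simple baseline classifier."""
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def make_windows(load, lookback=12, horizon=3, overload_threshold=0.8):
    """Sliding windows of past load; binary overload label `horizon` steps ahead."""
    X, y = [], []
    for t in range(lookback, len(load) - horizon):
        X.append(load[t - lookback:t])
        y.append(int(load[t + horizon] > overload_threshold))
    return np.array(X), np.array(y)

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    t = np.arange(5000)
    # Synthetic daily-like load pattern with noise (stand-in for real traffic).
    load = 0.5 + 0.3 * np.sin(2 * np.pi * t / 288) + 0.1 * rng.standard_normal(t.size)
    X, y = make_windows(np.clip(load, 0, 1))
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, shuffle=False)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(f"overload-detection accuracy: {clf.score(X_te, y_te):.3f}")
```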
Efficient management of computational tasks in the cloud-edge continuum (CEC) is crucial for modern computing environments. This paper introduces the Prioritized Delay-aware and Peer-to-Peer Task Offloading (PDPPnet) scheme, employing multi-agent Double Dueling Deep Q-Networks (DDDQNs) to optimize task offloading in a distributed framework. PDPPnet uniquely addresses the challenges of uncertain load dynamics and task prioritization at edge nodes, enabling autonomous decision-making for non-divisible, delay-sensitive tasks without reliance on prior task models from other nodes. By formulating a multi-agent computation offloading problem, PDPPnet minimizes the expected long-term latency and task drop ratio while respecting task priorities. The architecture supports both peer-to-peer (P2P) and peer-to-cloud (P2C) offloading, ensuring seamless task flow across the CEC. We integrate LSTM-predicted load dynamics with DDDQNs to enhance long-term cost estimation, significantly improving decision-making efficacy. Simulation results demonstrate that PDPPnet markedly outperforms conventional offloading algorithms, reducing task drop ratios and average delay while improving prioritized task throughput, thus optimizing the use of edge computational resources.
This work was supported by the European Union’s HORIZON research and innovation programme under grant agreement No 101070177.
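To illustrate the decision space described above (local execution, P2P offloading to a peer, or P2C offloading to the cloud, served in priority order), the following high-level sketch uses a greedy latency estimate in place of the learned DDDQN policy. The latency model, predicted loads, and all parameters are assumptions for illustration only.

```python
"""High-level sketch, not the paper's cost model: per task, estimate completion
latency for local, P2P, or P2C placement, serve higher-priority tasks first,
and drop a task if no placement meets its deadline."""

def estimated_latency(task, target):
    queue_wait = target["predicted_load"] / target["cpu_rate"]          # backlog
    compute = task["cycles"] / target["cpu_rate"]
    transfer = task["size_bits"] / target["link_rate"] if target["remote"] else 0.0
    return queue_wait + compute + transfer

def schedule(tasks, targets):
    """Priority-first scheduling over {local, peers, cloud} targets."""
    decisions = {}
    for task in sorted(tasks, key=lambda t: -t["priority"]):
        best = min(targets, key=lambda c: estimated_latency(task, c))
        if estimated_latency(task, best) <= task["deadline_s"]:
            best["predicted_load"] += task["cycles"]    # earlier tasks consume capacity
            decisions[task["name"]] = best["name"]
        else:
            decisions[task["name"]] = "drop"
    return decisions

if __name__ == "__main__":
    targets = [
        {"name": "local",  "predicted_load": 8e8, "cpu_rate": 4e9,  "link_rate": None,  "remote": False},
        {"name": "edge-2", "predicted_load": 1e8, "cpu_rate": 4e9,  "link_rate": 50e6, "remote": True},
        {"name": "cloud",  "predicted_load": 0.0, "cpu_rate": 2e10, "link_rate": 20e6, "remote": True},
    ]
    tasks = [
        {"name": "video", "cycles": 4e8, "size_bits": 2e6, "priority": 2.0, "deadline_s": 0.20},
        {"name": "log",   "cycles": 2e8, "size_bits": 1e6, "priority": 1.0, "deadline_s": 0.15},
    ]
    print(schedule(tasks, targets))
```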
Intelligent radio resource management is expected to play a vital role in addressing strict user demands, posing new challenges in the era of 6G services. In this paper, we provide an extensive study of artificial intelligence and machine learning (AI/ML) lifecycles within the open radio access network (O-RAN) framework, with particular emphasis on radio resource management advancement. More specifically, considering a multi-layered 6G network system, the AI/ML Cross-Layer Platform (AI-CLatform) is introduced to leverage AI/ML lifecycles devoted to O-RAN operation. With the near real-time RAN intelligent controller (Near-RT RIC) devoted to radio intelligence, special emphasis is given to the development, deployment, optimization, and continuous monitoring of ML models within O-RAN, while the interactions between O-RAN and the AI-CLatform are justified. To concretely illustrate the proposed end-to-end AI/ML sequential process, we present a proof-of-concept (PoC) practical scenario focusing on intelligent beamforming optimization for proactively managing interference using recurrent neural networks (RNNs). The quantitative simulation findings demonstrate the potential of the proposed AI/ML framework in enhancing critical 6G network functions within the O-RAN paradigm.
This work was partially supported by the 6G-CLOUD project, funded by EU HORIZON-JU-SNS-2023 program, under grant agreement No 101139073 (www.6g-cloud.eu/).
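As a compact illustration of the RNN-based proactive idea in the PoC above, the sketch below uses a GRU to forecast next-slot interference per candidate beam from a history window, after which the beam with the lowest predicted interference is selected. The architecture, dimensions, and synthetic data are assumptions; the paper only states that RNNs are used, not this specific design.

```python
"""Compact PyTorch sketch of RNN-based proactive interference management:
forecast per-beam interference, then pick the least-interfered beam."""
import torch
import torch.nn as nn

class InterferenceForecaster(nn.Module):
    def __init__(self, n_beams, hidden=32):
        super().__init__()
        self.gru = nn.GRU(n_beams, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_beams)    # one prediction per beam

    def forward(self, history):
        # history: (batch, time, n_beams) of measured interference levels.
        _, h = self.gru(history)
        return self.head(h[-1])                   # (batch, n_beams) next-slot forecast

def select_beam(model, history):
    with torch.no_grad():
        return int(model(history).argmin(dim=1))  # beam with lowest predicted interference

if __name__ == "__main__":
    n_beams, window = 8, 16
    model = InterferenceForecaster(n_beams)       # untrained; training loop omitted
    history = torch.rand(1, window, n_beams)      # synthetic measurement window
    print("proactively selected beam:", select_beam(model, history))
```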
In the burgeoning domain of the edge-cloud continuum (ECC), the efficient management of computational tasks offloaded from mobile devices to edge nodes is paramount. This paper introduces a Cooperative cOmputation Offloading scheme for ECC via Latency-aware multi-agent Reinforcement learning (COOLER), a distributed framework designed to address the challenges posed by the uncertain load dynamics at edge nodes. COOLER enables each edge node to autonomously make offloading decisions, optimizing for non-divisible, delay-sensitive tasks without prior knowledge of other nodes' task models and decisions. By formulating a multi-agent computation offloading problem, COOLER aims to minimize the expected long-term latency and task drop ratio. Following the ECC requirements for seamless task flow both within the edge layer and between the edge and cloud layers, COOLER considers that task computation decisions are three-fold: (i) local computation, (ii) horizontal offloading to another edge node, or (iii) vertical offloading to the cloud. The integration of advanced techniques such as long short-term memory (LSTM), double deep Q-network (DQN), and dueling DQN enhances the estimation of long-term costs, thereby improving decision-making efficacy. Simulation results demonstrate that COOLER significantly outperforms baseline offloading algorithms, reducing both the ratio of dropped tasks and the average delay, and better harnessing the processing capacities of edge nodes.
This work was supported by a project funded by the European Union’s HORIZON research and innovation programme, under grant agreement No 101070177.
In the last couple of decades, enterprises have embraced and leveraged the capacity, performance, scalability, and quality of cloud computing services. However, a few years ago, the edge computing concept enabled data processing and application execution near compute and data resources, aiming to reduce latency and promote higher security and sovereignty regarding data transfers. The simultaneous use of these two service models by enterprises has lately resulted in the concept of the edge-to-cloud continuum, which combines edge and cloud technologies and promotes standards and algorithms for resource orchestration, data management, and the deployment of solutions across the connected edge and cloud resources. Taking a step further in this direction, this paper introduces a framework to realise the Cognitive Computing Continuum (CCC). The framework leverages AI techniques to address the needs for optimal (edge and cloud) resource management and dynamic scaling, elasticity, and portability of hyper-distributed data-intensive applications. The proposed ENACT framework enables the automated management of distributed (edge and cloud) resources and the development of hyper-distributed applications that can take advantage of distributed deployment and execution opportunities to optimise their behaviour in terms of execution time, resource utilisation, and energy efficiency.
The work presented in this paper is part of the ENACT project, which has received funding from the European Union’s HORIZON Europe research and innovation programme, under grant agreement No 101135423.
This paper addresses the application of neural networks on resource-constrained edge devices. The goal is to achieve a speedup in both inference and training time, with minimal accuracy loss. More specifically, it brings to light the need for compressing current models, which are mostly developed with access to more resources than the device on which the model will eventually run. With the recent advances in the Internet of Things (IoT), the number of devices has risen and is expected to keep rising. Not only are these devices computationally limited, but their capabilities are neither homogeneous nor predictable at the time a model is developed, as new devices can be added at any time. This creates the need to quickly and efficiently produce models that fit each device's specifications. Transfer learning is a very efficient method in terms of training time, but it confines the user to the dimensionality of the pretrained model. Pruning is used as a way to overcome this obstacle and carry over knowledge to a variety of models that differ in size. The aim of this paper is to serve as an introduction to pruning as a concept and a template for further research, to quantify the efficiency of a variety of methods, and to expose some of their limitations. Pruning was performed on a telecommunications anomaly dataset and the results were compared to a baseline with regard to speed and accuracy.
This work was partially supported by a project funded by the EU HORIZON Europe programme, under grant agreement No 101093006.
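For readers unfamiliar with pruning as a concept, the following minimal example applies L1 unstructured magnitude pruning to a small multilayer perceptron using PyTorch's pruning utilities, reports the resulting sparsity, and runs one fine-tuning step. The model, sparsity level, and random stand-in data are illustrative assumptions and do not reproduce the paper's telecommunications anomaly setup.

```python
"""Minimal example of magnitude-based (L1 unstructured) pruning on a small MLP."""
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 2))

# Prune 70% of the smallest-magnitude weights in each Linear layer (L1 criterion).
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.7)

zeros = sum((m.weight == 0).sum() for m in model if isinstance(m, nn.Linear))
total = sum(m.weight.numel() for m in model if isinstance(m, nn.Linear))
print(f"global weight sparsity after pruning: {float(zeros) / total:.2%}")

# One brief fine-tuning step on random stand-in data to recover accuracy.
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(256, 64), torch.randint(0, 2, (256,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
opt.step()

# Make the pruning permanent (remove the masks, keep the zeroed weights).
for module in model:
    if isinstance(module, nn.Linear):
        prune.remove(module, "weight")
```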
Funded by the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.
