Session D-1

5G

Conference
May 11 Tue, 2:00 PM — 3:30 PM EDT

Energy-Efficient Orchestration of Metro-Scale 5G Radio Access Networks

Rajkarn Singh (University of Edinburgh, United Kingdom (Great Britain)); Cengis Hasan (University of Luxembourg & Interdisciplinary Centre for Security, Reliability and Trust (SNT), Luxembourg); Xenofon Foukas (Microsoft Research, United Kingdom (Great Britain)); Marco Fiore (IMDEA Networks Institute, Spain); Mahesh K Marina (The University of Edinburgh, United Kingdom (Great Britain)); Yue Wang (Samsung Electronics, USA)

RAN energy consumption is a major OPEX source for mobile telecom operators, and 5G is expected to increase these costs severalfold. Moreover, paradigm-shifting aspects of the 5G RAN architecture, such as RAN disaggregation, virtualization and cloudification, introduce new traffic-dependent resource management decisions that make energy-efficient 5G RAN orchestration harder. To address this challenge, we present a first comprehensive virtualized RAN (vRAN) system model aligned with 5G RAN specifications, which embeds realistic and dynamic models for computational load and energy consumption costs. We then formulate vRAN energy consumption optimization as an integer quadratic programming problem, whose NP-hard nature leads us to develop GreenRAN, a novel, computationally efficient and distributed solution that leverages Lagrangian decomposition and simulated annealing. Evaluations with real-world mobile traffic data for a large metropolitan area are another novel aspect of this work, and show that our approach yields energy efficiency gains of up to 25% and 42% over state-of-the-art and baseline traditional RAN approaches, respectively.

I. Introduction

The telecommunication industry currently consumes 2-3% of global energy, and energy consumption constitutes 20-40% of the operating expenditure (OPEX) of mobile network operators [1]. As we head to 5G, energy consumption is expected to increase further by 2-3 times due to the infrastructure growth needed to cope with the surge in mobile data traffic [2], [3]. Over 90% of operators have expressed concerns about rising energy costs [4]. Base stations (BSs), and consequently the radio access network (RAN), have traditionally been the major source of energy consumption in cellular networks [5]. This is expected to remain the case in 5G systems [1]. Developing RAN solutions that achieve high energy efficiency is thus crucial for 5G sustainability.
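The simulated-annealing ingredient of the approach described above can be illustrated with a toy sketch. This is not the authors' GreenRAN algorithm (which also uses Lagrangian decomposition); it is a minimal, self-contained annealer minimizing a hypothetical quadratic energy model over a binary placement of vRAN functions onto two servers. All loads and parameters are made up for illustration.

```python
import math
import random

def simulated_annealing(cost, init, neighbor, t0=1.0, cooling=0.95, steps=500, seed=0):
    """Generic simulated-annealing minimizer: always accepts improving moves,
    accepts worsening moves with probability exp(-delta/T), and cools T
    geometrically after each step."""
    rng = random.Random(seed)
    x, fx = init, cost(init)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = cost(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / max(t, 1e-9)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# Toy instance: place 6 vRAN functions on 2 servers; the energy of a server
# grows quadratically in its load, so balancing the load is optimal.
loads = [3, 1, 4, 1, 5, 2]

def energy(assign):
    per_server = [0, 0]
    for f, s in enumerate(assign):
        per_server[s] += loads[f]
    return sum(l * l for l in per_server)  # quadratic energy model

def flip_one(assign, rng):
    a = list(assign)
    i = rng.randrange(len(a))
    a[i] = 1 - a[i]          # move one function to the other server
    return tuple(a)

best, e = simulated_annealing(energy, (0,) * 6, flip_one)
```

Starting from the all-on-one-server assignment (energy 256), the annealer quickly drifts toward balanced placements; the global optimum here splits the total load 8/8 for an energy of 128.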

mCore: Achieving Sub-millisecond Scheduling for 5G MU-MIMO Systems

Yongce Chen, Yubo Wu, Thomas Hou and Wenjing Lou (Virginia Tech, USA)

MU-MIMO technology enables a base station (BS) to transmit signals to multiple users simultaneously on the same frequency band. It is a key technology for 5G NR to increase the data rate. In 5G specifications, an MU-MIMO scheduler must determine RB allocation and MCS assignment for each user in each TTI. Under MU-MIMO, multiple users may be co-scheduled on the same RB and each user may carry multiple data streams simultaneously. In addition, the scheduler must meet a stringent real-time requirement (∼1 ms) during decision making to be useful. This paper presents mCore, a novel 5G scheduler that achieves ∼1 ms scheduling with joint optimization of RB allocation and MCS assignment for MU-MIMO users. The key idea of mCore is to perform a multi-phase optimization that leverages large-scale parallel computation. In each phase, mCore either decomposes the optimization problem into a number of independent sub-problems, or reduces the search space to a smaller but most promising subspace, or both. We implement mCore on a commercial off-the-shelf GPU. Experimental results show that mCore offers the best scheduling performance for up to 100 RBs, 100 users, 29 MCS levels and 4 × 12 antennas when compared to other state-of-the-art algorithms. It is also the only algorithm that can find its scheduling solution in ∼1 ms.
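The decomposition idea above (independent sub-problems that can be evaluated in parallel) can be sketched with a deliberately simplified, single-user-per-RB scheduler. This is not mCore's algorithm: the quality matrix, MCS table and two-phase rule here are hypothetical stand-ins, and real MU-MIMO co-scheduling is far richer.

```python
# Toy two-phase decomposition: phase 1 assigns each RB independently to its
# best user (parallelizable over RBs); phase 2 picks a conservative MCS per
# user from the worst-quality RB it holds (parallelizable over users).
def schedule(quality, mcs_table):
    """quality[u][r]: SINR-like metric for user u on RB r.
    mcs_table: ascending list of (quality threshold, MCS index)."""
    n_users, n_rbs = len(quality), len(quality[0])
    alloc = {u: [] for u in range(n_users)}
    for r in range(n_rbs):                       # phase 1: per-RB sub-problems
        u_best = max(range(n_users), key=lambda u: quality[u][r])
        alloc[u_best].append(r)
    mcs = {}
    for u, rbs in alloc.items():                 # phase 2: per-user sub-problems
        if not rbs:
            continue
        worst = min(quality[u][r] for r in rbs)
        mcs[u] = max((m for t, m in mcs_table if worst >= t), default=0)
    return alloc, mcs

quality = [[10, 2, 7], [4, 9, 6]]         # 2 users, 3 RBs (made-up metrics)
mcs_table = [(0, 0), (5, 1), (8, 2)]      # quality threshold -> MCS index
alloc, mcs = schedule(quality, mcs_table)
```

Because every RB (and later every user) is handled by an independent sub-problem, each loop body could run as one GPU thread, which is the spirit of the large-scale parallelism the abstract describes.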

SteaLTE: Private 5G Cellular Connectivity as a Service with Full-stack Wireless Steganography

Leonardo Bonati, Salvatore D'Oro, Francesco Restuccia, Stefano Basagni and Tommaso Melodia (Northeastern University, USA)

Fifth-generation (5G) systems will extensively employ radio access network (RAN) softwarization. This key innovation enables the instantiation of "virtual cellular networks" running on different slices of the shared physical infrastructure. In this paper, we propose the concept of Private Cellular Connectivity as a Service (PCCaaS), where infrastructure providers deploy covert network slices known only to a subset of users. We then present SteaLTE as the first realization of a PCCaaS-enabling system for cellular networks. At its core, SteaLTE utilizes wireless steganography to disguise data as noise to adversarial receivers. Unlike previous work, however, it takes a full-stack approach to steganography, contributing an LTE-compliant steganographic protocol stack for PCCaaS-based communications, as well as packet schedulers and operations to embed covert data streams on top of traditional cellular traffic (primary traffic). SteaLTE balances undetectability and performance by mimicking channel impairments so that covert data waveforms are almost indistinguishable from noise. We evaluate the performance of SteaLTE on an indoor LTE-compliant testbed under different traffic profiles, distances and mobility patterns. We further test it on the outdoor PAWR POWDER platform over long-range cellular links. Results show that in most experiments SteaLTE imposes little loss of primary-traffic throughput in the presence of covert data transmissions (< 6%), making it suitable for undetectable PCCaaS networking.

Store Edge Networked Data (SEND): A Data and Performance Driven Edge Storage Framework

Adrian-Cristian Nicolaescu (University College London (UCL), United Kingdom (Great Britain)); Spyridon Mastorakis (University of Nebraska, Omaha, USA); Ioannis Psaras (Protocol Labs & University College London, United Kingdom (Great Britain))

The number of devices that the edge of the Internet accommodates and the volume of data these devices generate are expected to grow dramatically in the years to come. As a result, managing and processing such massive amounts of data at the edge becomes a vital issue. This paper proposes "Store Edge Networked Data" (SEND), a novel framework for in-network storage management realized through data repositories deployed at the network edge. SEND considers different criteria (e.g., data popularity, data proximity to processing functions at the edge) to intelligently place different categories of raw and processed data at the edge, based on system-wide identifiers of the data context, called labels. We implement a data repository prototype on top of the Google file system, which we evaluate on real-world datasets of images and Internet of Things device measurements. To scale up our experiments, we perform a network simulation study based on synthetic and real-world datasets, evaluating the performance and trade-offs of the SEND design as a whole. Our results demonstrate that SEND achieves data insertion times of 0.06-0.9 ms, data lookup times of 0.5-5.3 ms, and on-time completion of up to 92% of user requests for the retrieval of raw and processed data.
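The criteria-driven placement described above can be sketched as a small scoring rule. This is not SEND's actual placement logic; the weights, repository fields and scoring function are hypothetical, illustrating only the idea of ranking edge repositories by popularity and proximity before placing labeled data.

```python
# Toy label-driven placement: rank edge repositories by a weighted score of
# item popularity and proximity to the processing site, then place the item
# on the best-scoring repository that still has free capacity.
def place(item, repos, w_pop=0.6, w_prox=0.4):
    """item: dict with 'label', 'popularity' in [0, 1], 'size'.
    repos: list of dicts with 'name', 'free' capacity, 'distance' (hops)."""
    def score(repo):
        proximity = 1.0 / (1 + repo["distance"])   # closer -> higher score
        return w_pop * item["popularity"] + w_prox * proximity
    for repo in sorted(repos, key=score, reverse=True):
        if repo["free"] >= item["size"]:
            repo["free"] -= item["size"]           # reserve the space
            return repo["name"]
    return None                                    # no repository can hold it

repos = [
    {"name": "edge-A", "free": 10, "distance": 1},
    {"name": "edge-B", "free": 50, "distance": 3},
]
chosen = place({"label": "camera/raw", "popularity": 0.9, "size": 20}, repos)
```

Here the closer repository edge-A scores higher but lacks capacity, so the item falls through to edge-B; a capacity-aware fallback like this is one plausible way to resolve the placement trade-offs the abstract mentions.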

Session Chair

Christopher Brinton (Purdue University)

Session D-2

5G and beyond

Conference
May 11 Tue, 4:00 PM — 5:30 PM EDT

A Deep-Learning-based Link Adaptation Design for eMBB/URLLC Multiplexing in 5G NR

Yan Huang (Nvidia, USA); Thomas Hou and Wenjing Lou (Virginia Tech, USA)

URLLC is an important use case in 5G NR that targets delay-sensitive applications at the 1-ms level. For fast transmission of URLLC traffic, a promising mechanism is to multiplex URLLC traffic into a channel occupied by eMBB service through preemptive puncturing. Although preemptive puncturing can offer transmission resources to URLLC on demand, it adversely affects the throughput and link reliability of the eMBB service. To mitigate this adverse impact, a possible approach is to employ link adaptation (LA) through MCS selection for eMBB users. In this paper, we study the problem of maximizing eMBB throughput through MCS selection while ensuring the link reliability requirement for eMBB users. We present DELUXE - the first successful design and implementation based on deep learning to address this problem. DELUXE involves a novel mapping method to compress high-dimensional eMBB transmission information into a low-dimensional representation with minimal information loss, a learning method to learn and predict the block-error rate (BLER) under each MCS, and a fast calibration method to compensate for errors in BLER predictions. For proof of concept, we implement DELUXE in a link-level 5G NR simulator. Extensive experimental results show that DELUXE can successfully maintain the desired link reliability for eMBB while striving for spectral efficiency. In addition, our implementation can meet the real-time requirement (< 125 µs) in 5G NR.
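The selection-plus-calibration step described above can be sketched as a simple decision rule. The predictor values, target and additive calibration below are hypothetical placeholders (in DELUXE the BLER predictions come from a learned model); the sketch only shows how a reliability target gates the MCS choice.

```python
# Toy link-adaptation rule: among the MCS levels, pick the highest one whose
# (calibrated) predicted BLER still satisfies the reliability target.
def select_mcs(predicted_bler, target=0.1, calibration=0.0):
    """predicted_bler: list indexed by MCS level, non-decreasing in MCS
    (higher MCS = higher rate but higher error probability).
    calibration: additive correction compensating predictor bias."""
    best = 0
    for mcs, bler in enumerate(predicted_bler):
        if bler + calibration <= target:
            best = mcs           # a higher MCS still meets the target
    return best

bler = [0.01, 0.03, 0.08, 0.2, 0.5]   # hypothetical per-MCS BLER predictions
chosen = select_mcs(bler, target=0.1)
```

With these numbers the rule settles on MCS 2 (predicted BLER 0.08 ≤ 0.1, while MCS 3 at 0.2 would violate the target); adding a positive calibration offset makes the choice more conservative, which mirrors the role of the fast calibration step.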

Reusing Backup Batteries as BESS for Power Demand Reshaping in 5G and Beyond

Guoming Tang (Peng Cheng Laboratory, China); Hao Yuan and Deke Guo (National University of Defense Technology, China); Kui Wu (University of Victoria, Canada); Yi Wang (Southern University of Science and Technology, China)

Mobile network operators are upgrading their network facilities and shifting to the 5G era at an unprecedented pace. The huge operating expense (OPEX), mainly the energy consumption cost, has become a major concern of the operators. In this work, we investigate the energy cost-saving potential of transforming the backup batteries of base stations (BSs) into a distributed battery energy storage system (BESS). Specifically, to minimize the total energy cost, we model the distributed BESS discharge/charge scheduling as an optimization problem that incorporates comprehensive practical considerations. Then, considering the dynamic BS power demands in practice, we propose a deep reinforcement learning (DRL) based approach to make BESS scheduling decisions in real time. Experiments using real-world BS deployment and traffic load data demonstrate that with our DRL-based BESS scheduling, the peak power demand charge of BSs can be reduced by up to 26.59%, and the yearly OPEX saving for 2,282 5G BSs could reach up to US$185,000.
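The peak-shaving effect that drives the demand-charge savings above can be illustrated with a simple threshold heuristic. This is not the paper's DRL scheduler; the demand trace, capacity and threshold are made-up numbers, and the greedy rule stands in for the learned policy.

```python
# Toy peak-shaving schedule: discharge the backup battery whenever demand
# exceeds a threshold, recharge when there is headroom, within rate and
# capacity limits. Returns the grid draw per time slot.
def peak_shave(demand, capacity, threshold, max_rate):
    soc = capacity                                # start fully charged
    grid = []
    for d in demand:
        if d > threshold:                         # discharge to shave the peak
            discharge = min(d - threshold, max_rate, soc)
            soc -= discharge
            grid.append(d - discharge)
        else:                                     # recharge up to the threshold
            charge = min(threshold - d, max_rate, capacity - soc)
            soc += charge
            grid.append(d + charge)
    return grid

demand = [50, 80, 120, 140, 90, 60]               # hypothetical BS power demand
grid = peak_shave(demand, capacity=60, threshold=100, max_rate=40)
```

On this trace the grid peak drops from 140 to 100: the battery covers the excess during the two peak slots and refills afterwards. Since demand charges are billed on the peak draw, flattening the profile this way is exactly the lever the DRL scheduler exploits.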

The Impact of Baseband Functional Splits on Resource Allocation in 5G Radio Access Networks

Iordanis Koutsopoulos (Athens University of Economics and Business, Greece)

We study physical-layer (PHY) baseband functional split policies in 5G Centralized Radio Access Network (C-RAN) architectures that include a central location, the baseband unit (BBU) with some BBU servers, and a set of base stations (BSs), the remote radio heads (RRHs), each with an RRH server. Each RRH is connected to the BBU location through a fronthaul link. We consider a scenario with many frame streams at the BBU location, where each stream needs to be processed by a BBU server before being sent to an RRH. For each stream, a functional split needs to be selected, which determines how the computational load of the baseband processing chain for the stream's frames is partitioned between the BBU and RRH servers. For streams that are served by the same BBU server, a scheduling policy is also needed. We formulate and solve the joint resource allocation problem of functional split selection, BBU server allocation and server scheduling, with the goal of minimizing the total average end-to-end delay or the maximum average delay over RRH streams. The total average end-to-end delay is the sum of (i) the scheduling (queueing) and processing delay at the BBU servers, (ii) the data transport delay on the fronthaul link, and (iii) the processing delay at the RRH server. Numerical results show the delay improvements obtained by incorporating functional split selection into resource allocation.
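The three-component delay model above suggests a simple sketch of split selection for a single stream (the paper's joint problem also covers server allocation and scheduling, which are omitted here; all split names, loads and rates are hypothetical).

```python
# Toy functional split selection: for each candidate split, the end-to-end
# delay is BBU processing + fronthaul transport + RRH processing; pick the
# split with the minimum total delay.
def pick_split(splits, bbu_rate, rrh_rate, fh_bandwidth):
    """splits: dict name -> (bbu_load, rrh_load, fronthaul bits per frame).
    A more centralized split shifts load to the BBU but inflates the
    fronthaul traffic; a more distributed split does the opposite."""
    def delay(name):
        bbu_load, rrh_load, fh_bits = splits[name]
        return (bbu_load / bbu_rate          # (i) BBU processing delay
                + fh_bits / fh_bandwidth     # (ii) fronthaul transport delay
                + rrh_load / rrh_rate)       # (iii) RRH processing delay
    return min(splits, key=delay)

splits = {
    "split-hi": (6.0, 2.0, 8.0),   # centralized: heavy fronthaul traffic
    "split-lo": (2.0, 6.0, 1.0),   # distributed: light fronthaul traffic
}
best = pick_split(splits, bbu_rate=4.0, rrh_rate=1.0, fh_bandwidth=2.0)
```

With the slow fronthaul above the distributed split wins; raising `fh_bandwidth` flips the choice toward the centralized split, reproducing the intuition that the best split depends on where processing capacity and transport capacity are plentiful.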

Optimal Resource Allocation for Statistical QoS Provisioning in Supporting mURLLC Over FBC-Driven 6G Terahertz Wireless Nano-Networks

Xi Zhang and Jingqing Wang (Texas A&M University, USA); H. Vincent Poor (Princeton University, USA)

The new and important service class of massive Ultra-Reliable Low-Latency Communications (mURLLC) is defined in the 6G era to guarantee very stringent quality-of-service (QoS) requirements, such as ultra-high data rate, super-high reliability, tightly bounded end-to-end latency, etc. Various promising 6G techniques, such as finite blocklength coding (FBC) and Terahertz (THz) communications, have been proposed to significantly improve the QoS performance of mURLLC. Furthermore, with rapid developments in nano techniques, THz wireless nano-networks have drawn great research attention due to their ability to support ultra-high data rates while addressing spectrum scarcity and capacity limitation problems. However, how to efficiently integrate THz-band nano communications with FBC to support statistical delay/error-rate bounded QoS provisioning for mURLLC still remains an open challenge over 6G THz wireless nano-networks. To overcome these problems, in this paper we propose THz-band statistical delay/error-rate bounded QoS provisioning schemes supporting mURLLC standards by optimizing both the transmit power and the blocklength over 6G THz wireless nano-networks in the finite blocklength regime. Specifically, first, we develop the FBC-driven THz-band wireless channel models at nano-scale. Second, we build up the THz-band interference model and derive the channel capacity and channel dispersion functions using FBC. Third, we maximize the ɛ-effective capacity by developing joint optimal resource allocation policies under statistical delay/error-rate bounded QoS constraints. Finally, we conduct extensive simulations to validate and evaluate our proposed schemes at the THz band in the finite blocklength regime.

Session Chair

Mehmet Can Vuran (U. Nebraska, Lincoln)

