Session G-1

Theory 1

Wed, May 17, 11:00 AM — 12:30 PM EDT
Babbio 221

SeedTree: A Dynamically Optimal and Local Self-Adjusting Tree

Arash Pourdamghani (TU Berlin, Germany); Chen Avin (Ben-Gurion University of the Negev, Israel); Robert Sama and Stefan Schmid (University of Vienna, Austria)

We consider the fundamental problem of designing a self-adjusting tree that efficiently and locally adapts itself to the demand it serves (namely, accesses to the items stored by the tree nodes), striking a balance between the benefits of such adjustments (faster access) and their costs (reconfigurations). This problem finds applications, among others, in the context of emerging demand-aware and reconfigurable datacenter networks, and features connections to self-adjusting data structures. Our main contribution is SeedTree, a dynamically optimal self-adjusting tree that supports local (i.e., greedy) routing, which is particularly attractive under highly dynamic demands. SeedTree relies on an innovative approach that defines a set of unique paths based on randomized item addresses and uses a small constant number of items per node. We complement our analytical results by demonstrating the benefits of SeedTree empirically, evaluating it on various synthetic and real-world communication traces.
Speaker Chen Avin

Chen Avin is a Professor at the School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Israel. He received his M.Sc. and Ph.D. in computer science from the University of California, Los Angeles (UCLA) in 2003 and 2006, respectively. He recently served as chair of the Communication Systems Engineering department at BGU. His current research interests are data-driven graph and network algorithms, modeling, and analysis, with an emphasis on demand-aware networks, distributed systems, social networks, and randomized algorithms for networking.

Self-Adjusting Partially Ordered Lists

Vamsi Addanki, Maciej Pacut and Arash Pourdamghani (TU Berlin, Germany); Gábor Rétvári (Budapest University of Technology and Economics, Hungary); Stefan Schmid and Juan Vanerio (University of Vienna, Austria)

We introduce self-adjusting partially ordered lists, a generalization of self-adjusting lists in which there may additionally be constraints on the relative order of some nodes in the list. The lists self-adjust to improve performance while serving input sequences exhibiting favorable properties, such as locality of reference, but the constraints must be respected. We design a deterministic adjusting algorithm that operates without any assumptions about the input distribution and without maintaining frequency statistics or timestamps. Although the partial order limits the effectiveness of self-adjustments, the deterministic algorithm performs close to optimum (it is 4-competitive). In addition, we design a family of randomized algorithms with improved competitive ratios, also handling rearrangement cost scaled by an arbitrary constant d ≥ 1. Moreover, we observe that different constraints influence the competitiveness of online algorithms, and we shed light on this aspect with a lower bound. We investigate the applicability of our lists in the context of network packet classification. Our evaluations show that our classifier performs similarly to a static list for low-locality traffic, but significantly outperforms Efficuts (by a factor of 7x), CutSplit (3.6x), and the static list (14x) for high-locality traffic and small rulesets.
Speaker Arash Pourdamghani (TU Berlin)

Arash Pourdamghani is a direct-track Ph.D. student in the INET group at the Technical University of Berlin, Germany. Previously, he was a researcher at the University of Vienna and completed research internships at IST Austria and CUHK. He received his B.Sc. from Sharif University of Technology. He is interested in algorithm design and analysis with applications in networks, distributed systems, and blockchains, with a particular focus on self-adjusting networks.

Online Dynamic Acknowledgement with Learned Predictions

Sungjin Im (University of California at Merced, USA); Benjamin Moseley (Carnegie Mellon University, USA); Chenyang Xu (East China Normal University, China); Ruilong Zhang (City University of Hong Kong, Hong Kong)

We revisit the online dynamic acknowledgement problem. In this problem, a sequence of requests arrives over time to be acknowledged, and all outstanding requests can be satisfied simultaneously by one acknowledgement. The goal is to minimize the total request delay plus the acknowledgement cost. This elegant model studies the trade-off between acknowledgement cost and the waiting experienced by requests. The problem has been well studied, and tight competitive ratios have been determined. For this well-studied problem, we focus on how to use machine-learned predictions effectively to achieve better performance.

We develop algorithms that perform arbitrarily close to the optimum with accurate predictions, while concurrently having guarantees arbitrarily close to what the best online algorithms can offer without access to predictions, thereby achieving optimal consistency and robustness simultaneously. This new result is enabled by our novel prediction error measure. No error measure had been defined for the problem prior to our work, and natural measures fail because requests with different arrival times have different effects on the objective. We hope our ideas can be used for other online problems with temporal aspects that have resisted proper error measures.
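For background, the classical deterministic online strategy for this problem (without predictions) acknowledges exactly when the delay accrued by outstanding requests reaches the acknowledgement cost, a well-known 2-competitive rule. The sketch below simulates that baseline under an assumed unit-step delay model; it illustrates the base problem, not the paper's learning-augmented algorithm.

```python
def greedy_dynamic_ack(arrivals, ack_cost, horizon):
    """Classical greedy baseline: acknowledge all outstanding requests once
    their accrued waiting cost reaches the fixed acknowledgement cost.
    Returns total cost = (#acks) * ack_cost + total request delay.
    Unit time steps; `arrivals` is a list of integer arrival times."""
    pending = sorted(arrivals)
    outstanding = 0          # requests waiting for an acknowledgement
    accrued = 0              # delay accrued by the outstanding requests
    total_delay = 0
    acks = 0
    idx = 0
    for t in range(horizon + 1):
        while idx < len(pending) and pending[idx] == t:
            outstanding += 1
            idx += 1
        if outstanding and accrued >= ack_cost:
            acks += 1
            outstanding = 0
            accrued = 0
        total_delay += outstanding   # each outstanding request waits one step
        accrued += outstanding
    if outstanding:                  # flush remaining requests at the horizon
        acks += 1
    return acks * ack_cost + total_delay
```

With one request at time 0 and acknowledgement cost 3, the rule waits three steps before acknowledging, paying equal delay and acknowledgement cost — the ski-rental-style balance that yields the 2-competitive guarantee.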
Speaker Chenyang Xu (East China Normal University)

Chenyang Xu is an assistant professor at East China Normal University. His research interests lie broadly in operations research and theoretical computer science. His recent work focuses mainly on using machine-learned predictions to design robust algorithms for combinatorial optimization problems, as well as on topics in fair allocation.

LOPO: An Out-of-order Layer Pulling Orchestration Strategy for Fast Microservice Startup

Lin Gu and Junhao Huang (Huazhong University of Science and Technology, China); Shaoxing Huang (Huazhong University of Science and Technology & HUST, China); Deze Zeng (China University of Geosciences, China); Bo Li (Hong Kong University of Science and Technology, Hong Kong); Hai Jin (Huazhong University of Science and Technology, China)

Container-based microservices have been widely adopted to promote cloud elasticity. Mainstream Docker containers are structured in layers, organized in a stack with bottom-up dependency. To start a microservice, the required layers are pulled from a remote registry and stored on its host server, following the layer dependency. This incurs a long microservice startup time and hinders performance. In this paper, we discover, for the first time, that the layer pulling order can be adjusted to accelerate microservice startup. Specifically, we address the problem of microservice layer pulling orchestration for startup time minimization and prove that it is NP-hard. We propose a Longest-chain based Out-of-order layer Pulling Orchestration (LOPO) strategy with low computational complexity and a guaranteed approximation ratio. Through extensive real-world trace-driven experiments, we verify the efficiency of LOPO and demonstrate that it reduces microservice startup time by 22.71% on average compared with state-of-the-art solutions.
Speaker Junhao Huang (Huazhong University of Science and Technology)

Junhao Huang received his B.S. degree from the School of Computer Science and Engineering, Northeastern University, Shenyang, China, in 2020. He is currently pursuing an M.S. degree in the School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China. His current research interests focus on cloud computing and edge computing.

Session Chair

Xiaowen Gong

Session G-2

Theory 2

Wed, May 17, 2:00 PM — 3:30 PM EDT
Babbio 221

One Pass is Sufficient: A Solver for Minimizing Data Delivery Time over Time-varying Networks

Peng Wang (Xidian University, China); Suman Sourav (Singapore University of Technology and Design, Singapore); Hongyan Li (Xidian University, China); Binbin Chen (Singapore University of Technology and Design, Singapore)

How can we allocate network paths and their resources to minimize the delivery time of data transfer tasks over time-varying networks? Solving this MDDT (Minimizing Data Delivery Time) problem has important applications from data centers to delay-tolerant networking. In particular, with the rapid deployment of satellite networks in recent years, an efficient MDDT solver will serve as a key building block there. The MDDT problem can be solved in polynomial time by finding maximum flows in a time-expanded graph. A binary-search-based solver (SIGCOMM'11) incurs O(N log(N) u (|V*| + |E|)) time complexity, where N is the time horizon, u is the data transfer volume, |V*| is the number of nodes, and |E| is the average number of edges over time. In this work, we design a one-pass solver that progressively expands the graph over time until it reaches the earliest time interval n at which the delivery can complete. By reusing the max-flow results calculated in earlier iterations, it solves the MDDT problem with only O(n u (|V*| + |E|)) time complexity. We evaluate our solver on a network of 184 satellites from the Starlink constellation. We demonstrate a more than 75x speed-up in computation time and show that our solution can enable advanced applications such as preemptive scheduling.
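To make the time-expanded-graph idea concrete, here is a toy sketch (not the authors' solver): the graph is grown one interval at a time, flow already found is kept in the residual graph and only extended, and expansion stops at the earliest interval where the requested volume fits. The snapshot format, node naming, and the super-sink are illustrative assumptions.

```python
from collections import defaultdict, deque

def bfs_augment(cap, flow, s, t):
    """Push one augmenting path (found by BFS) through the residual graph."""
    parent = {s: None}
    q = deque([s])
    while q and t not in parent:
        u = q.popleft()
        for v in cap[u]:
            if v not in parent and cap[u][v] - flow[u][v] > 0:
                parent[v] = u
                q.append(v)
    if t not in parent:
        return 0
    path, v = [], t
    while parent[v] is not None:
        path.append((parent[v], v))
        v = parent[v]
    push = min(cap[a][b] - flow[a][b] for a, b in path)
    for a, b in path:
        flow[a][b] += push
        flow[b][a] -= push
    return push

def earliest_delivery(snapshots, src, dst, volume):
    """snapshots[i] lists (u, v, capacity) links active during interval i.
    Returns the earliest interval by which `volume` units sourced at (src, 0)
    can reach dst, or None within the horizon. Flow from earlier intervals
    stays in the residual graph, so each new interval only continues
    augmenting -- the 'one pass' idea."""
    INF = float('inf')
    SINK = ('sink', -1)                       # super-sink over all dst copies
    cap = defaultdict(lambda: defaultdict(int))
    flow = defaultdict(lambda: defaultdict(int))
    sent = 0
    for i, links in enumerate(snapshots):
        for u, v, c in links:                 # transmission edges in interval i
            cap[(u, i)][(v, i)] += c
            cap[(v, i)][(u, i)] += 0          # ensure residual entry exists
        if i > 0:                             # storage edges: data may wait
            for node, t in [k for k in cap if k != SINK and k[1] == i - 1]:
                cap[(node, i - 1)][(node, i)] = INF
                cap[(node, i)][(node, i - 1)] += 0
        cap[(dst, i)][SINK] = INF
        while sent < volume:                  # extend the flow found so far
            pushed = bfs_augment(cap, flow, (src, 0), SINK)
            if pushed == 0:
                break
            sent += pushed
        if sent >= volume:
            return i
    return None
```

A two-interval example where a link s→a exists first and a→d only later delivers one unit by interval 1; requesting two units exhausts the horizon. The real solver's complexity gain comes precisely from never discarding the flow computed for earlier intervals.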
Speaker Peng Wang (Xidian University)

Peng Wang is currently a Ph.D. student in Communication and Information Systems at Xidian University under the guidance of Prof. Hongyan Li (2018.9–present). He received his Bachelor's degree in Telecommunications Engineering from Xidian University (2013.8–2017.6). He was also a visiting Ph.D. student at the Singapore University of Technology and Design under the guidance of Prof. Binbin Chen (2021.5–2022.11).

Peng's research focuses on using graph theory, combinatorial optimization, and other mathematical tools to model time-varying resources, analyze network capacity under different resource/QoS constraints, and design delay-guaranteed routing and scheduling algorithms for satellite and 5G terrestrial networks.

Neural Constrained Combinatorial Bandits

Shangshang Wang, Simeng Bian, Xin Liu and Ziyu Shao (ShanghaiTech University, China)

Constrained combinatorial contextual bandits have emerged as trending tools in intelligent systems and networks to model reward and cost signals under combinatorial decision-making. On the one hand, both signals are complex functions of the context; e.g., in federated learning, training loss (negative reward) and energy consumption (cost) are nonlinear functions of edge devices' system conditions (context). On the other hand, there are cumulative constraints on costs; e.g., the accumulated energy consumption should be budgeted by energy resources. Moreover, real-time systems often require such constraints to be guaranteed at any time or in each round, e.g., ensuring anytime fairness in task assignment to maintain the credibility of crowdsourcing platforms for workers. This setting poses the challenge of maximizing reward while satisfying anytime cumulative constraints. To address this challenge, we propose a primal-dual algorithm (Neural-PD) whose primal component adopts multi-layer perceptrons to estimate the reward and cost functions, and whose dual component estimates the Lagrange multiplier with a virtual queue. By integrating neural tangent kernel theory and Lyapunov-drift techniques, we prove that Neural-PD achieves a sharp regret bound and zero constraint violation. We also show that Neural-PD outperforms existing algorithms through extensive experiments on both synthetic and real-world datasets.
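The virtual-queue mechanism in the dual step can be illustrated in a few lines. This is a generic Lyapunov-style drift-plus-penalty sketch under an assumed per-round budget, not the paper's Neural-PD; the `Action` type and trade-off weight `V` are illustrative.

```python
from collections import namedtuple

# Hypothetical action with a known reward and cost, for illustration only.
Action = namedtuple('Action', ['reward', 'cost'])

def queue_update(Q, cost, budget):
    """Virtual queue: grows when the round's cost exceeds the per-round
    budget, drains (never below zero) otherwise. A large Q signals that
    the cumulative constraint is under pressure."""
    return max(Q + cost - budget, 0.0)

def choose_action(actions, Q, V=10.0):
    """Drift-plus-penalty selection: trade reward against queue-weighted
    cost, so costly actions are avoided exactly when the constraint is
    tight (Q is large)."""
    return max(actions, key=lambda a: V * a.reward - Q * a.cost)
```

With an empty queue the high-reward, high-cost action wins; once the queue has built up, the selection flips to the cheap action, which is how anytime constraint satisfaction is enforced without knowing the cost function in advance (in Neural-PD, the rewards and costs fed into this trade-off are neural-network estimates).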
Speaker Shangshang Wang (ShanghaiTech University)

Shangshang Wang is currently a Master's student in Computer Science at ShanghaiTech University, under the guidance of Prof. Ziyu Shao in the Laboratory for Intelligence Information and Decision (2021–present). He received his Bachelor's degree in Computer Science from ShanghaiTech University (2017–2021).

Variance-Adaptive Algorithm for Probabilistic Maximum Coverage Bandits with General Feedback

Xutong Liu (The Chinese University of Hong Kong, Hong Kong); Jinhang Zuo (Carnegie Mellon University, USA); Hong Xie (Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, China); Carlee Joe-Wong (Carnegie Mellon University, USA); John C.S. Lui (The Chinese University of Hong Kong, Hong Kong)

Probabilistic maximum coverage (PMC) is an important problem that can model many network applications, including mobile crowdsensing, network content delivery, and dynamic channel allocation, where an operator chooses nodes in a graph that can probabilistically cover other nodes. In this paper, we study PMC in the online learning context: the PMC bandit. In the PMC bandit, network parameters are not known a priori; the decision maker needs to learn the unknown parameters, with the goal of maximizing the total reward from the covered nodes. Though the PMC bandit has been studied previously, the existing model and its corresponding algorithm can be significantly improved. First, we propose the PMC-G bandit, whose feedback model generalizes existing semi-bandit feedback, allowing the PMC bandit to model applications such as online content delivery and online dynamic channel allocation. Next, we improve the existing combinatorial upper confidence bound (CUCB) algorithm by introducing a variance-adaptive algorithm, VA-CUCB. We prove that VA-CUCB achieves strictly better regret bounds, improving on CUCB by a factor of Õ(K), where K is the number of nodes selected in each round. Finally, experiments show our superior performance compared with benchmark algorithms on synthetic and real-world datasets.
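The general idea behind variance adaptation can be sketched with a generic UCB-V-style optimistic index (not VA-CUCB itself; the constants and the range parameter `b` are illustrative assumptions): the exploration bonus shrinks with the empirical variance rather than assuming the worst-case reward range.

```python
import math

def variance_adaptive_index(mean, var, n, t, b=1.0):
    """Optimistic index for one base arm: empirical mean plus a bonus whose
    leading term scales with the empirical variance `var`, plus a lower-order
    range term `b`. Classical UCB/CUCB instead uses a bonus based on the
    worst-case range alone, which is looser for low-variance arms."""
    if n == 0:
        return math.inf          # unseen arms are explored first
    log_t = math.log(max(t, 2))
    return mean + math.sqrt(2.0 * var * log_t / n) + 3.0 * b * log_t / n
```

In a combinatorial round, one would compute this index for every base arm and feed the indices to the offline oracle that picks the K nodes; the gain over CUCB comes from low-variance arms receiving much smaller bonuses, so fewer wasted explorations accumulate over the K selected arms.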
Speaker Jinhang Zuo (UMass Amherst & Caltech)

Jinhang Zuo is a joint postdoc at UMass Amherst and Caltech. He received his Ph.D. in ECE from CMU in 2022. His main research interests include online learning, resource allocation, and networked systems. He was a recipient of the CDS Postdoctoral Fellowship from UMass Amherst, Qualcomm Innovation Fellowship Finalist, AAAI-20 Student Scholarship, and Carnegie Institute of Technology Dean’s Fellowship.

Lock-based or Lock-less: Which Is Fresh?

Vishakha Ramani (Rutgers University, USA); Jiachen Chen (WINLAB, Rutgers University, USA); Roy Yates (Rutgers University, USA)

We examine status updating systems in which time-stamped status updates are stored/written in shared memory. Specifically, we compare Read-Copy-Update (RCU) and Readers-Writer Lock (RWL) as shared-memory synchronization primitives in terms of update freshness. To demonstrate the tension between readers and writers accessing shared memory, we consider a network scenario with a pair of coupled updating processes. Location updates of a mobile terminal are written to a shared-memory Forwarder Information Base (FIB) at a network forwarder. An application server sends "app updates" to the mobile terminal via the forwarder. App updates arriving at the forwarder are addressed (by reading the FIB) and forwarded to the mobile terminal. If a FIB read returns an outdated address, the misaddressed app update is lost in transit. We redesign these reader and writer processes using preemption mechanisms that improve the timeliness of updates. We present a Stochastic Hybrid Systems (SHS) framework to analyze the location and app update age processes and show how these two age processes are coupled through the synchronization primitives. Our analysis shows that the lock-based primitive (RWL) serves fresher app updates to the mobile terminal at higher location update rates, while the lock-less (RCU) mechanism favors timely delivery of app updates at lower location update rates.
Speaker Vishakha Ramani (Rutgers University)

Vishakha Ramani is a doctoral candidate at Rutgers University, where she is affiliated with the Wireless Information Networks Laboratory (WINLAB) and the Department of Electrical and Computer Engineering (ECE). In 2020, she earned a Master of Science degree from the ECE department at Rutgers University. Her research focuses on developing, analyzing, and designing algorithms for real-time networked systems, with a particular emphasis on using the Age-of-Information (AoI) as a performance metric of interest.

Session Chair

Yusheng Ji

Session G-3


Wed, May 17, 4:00 PM — 5:30 PM EDT
Babbio 221

Securing 5G OpenRAN with a Scalable Authorization Framework for xApps

Tolga O Atalay and Sudip Maitra (Virginia Tech, USA); Dragoslav Stojadinovic (Kryptowire LLC, USA); Angelos Stavrou (Virginia Tech & Kryptowire, USA); Haining Wang (Virginia Tech, USA)

The ongoing transformation of mobile networks from proprietary physical network boxes to virtualized functions and deployment models has led to more scalable and flexible network architectures capable of adapting to specific use cases. As an enabler of this movement, the OpenRAN initiative promotes standardization allowing for a vendor-neutral radio access network with open APIs. Moreover, the O-RAN Alliance has begun specification efforts conforming to OpenRAN's definitions. This includes the near-real-time RAN Intelligent Controller (RIC) overseeing a group of extensible applications (xApps). The use of these potentially untrusted third-party applications introduces a new attack surface to the mobile network plane with fundamental security and system design requirements that are yet to be addressed. To secure the 5G O-RAN xApp model, we introduce the xApp Repository Function (XRF) framework, which implements scalable authentication, authorization, and discovery for xApps. We first present the framework's system design and implementation details, followed by operational benchmarks in a production-grade containerized environment. The evaluation results, centered on active processing and operation times, show that our proposed framework can scale efficiently in a multi-threaded Kubernetes microservice environment and support a large number of clients with minimal overhead.
Speaker Tolga Atalay (Virginia Tech)

Tolga is a Ph.D. student in the Bradley Department of Electrical and Computer Engineering at Virginia Tech. His work revolves around the system design and implementation of robust and scalable cybersecurity platforms for 5G and beyond networks.

A Close Look at 5G in the Wild: Unrealized Potentials and Implications

Yanbing Liu and Chunyi Peng (Purdue University, USA)

This paper reports our in-depth measurement study of 5G experience with three US operators (AT&T, Verizon, and T-Mobile). We not only characterize 5G coverage, availability, and performance (over both mmWave and Sub-6GHz bands), but also identify several performance issues and analyze their root causes. We find that real 5G experience is not as satisfactory as anticipated, mainly because faster 5G is not used as much as it can and should be. We have several astonishing findings: despite huge speed potentials (up to several hundred Mbps), more than half of this potential is not realized in practice; such underutilization is mainly rooted in current practice and policies that manage radio resources in a performance-oblivious manner; 5G over mmWave and Sub-6GHz bands hurt each other, so that doing more gets less (more 5G underutilization in AT&T and Verizon, which support both bands, than in T-Mobile, which supports one); and transiently missing 5G is not uncommon, with negative impacts that last much longer. Inspired by our findings, we design a patch solution called 5GBoost. Our preliminary evaluation validates its effectiveness in realizing more 5G potential.
Speaker Yanbing Liu (Purdue University)

Yanbing is a Ph.D. student in the Department of Computer Science at Purdue University. He is supervised by Prof. Chunyi Peng. His research interests are in the area of mobile networking, with a focus on 5G networking measurement and design.

Spotlight on 5G: Performance, Device Evolution and Challenges from a Mobile Operator Perspective

Paniz Parastar (University of Oslo, Norway); Andra Lutu (Telefónica Research, Spain); Ozgu Alay (University of Oslo & Simula Metropolitan, Norway); Giuseppe Caso (Ericsson Research, Sweden); Diego Perino (Meta, Spain)

Fifth Generation (5G) has been acknowledged as a significant shift in cellular networks, expected to run significantly different classes of services with outstanding performance in terms of low latency, high capacity, and extreme reliability. Managing the resulting complexity of mobile network architectures will depend on making efficient decisions at all network levels based on end-user requirements. However, to achieve this, it is critical to first understand the current mobile ecosystem and capture device heterogeneity, which is one of the major challenges to ensuring the successful exploitation of 5G technologies.

In this paper, we conduct a large-scale measurement study of a commercial mobile operator in the UK, focusing on bringing forward a real-world view on the available network resources, as well as how more than 30M end-user devices utilize the mobile network. We focus on the current status of the 5G Non-Standalone (NSA) deployment and the network-level performance and show how it caters to the prominent use cases that 5G promises to support. Finally, we demonstrate that a fine-granular set of requirements is, in fact, necessary to orchestrate the service to the diverse groups of 5G devices, some of which operate in permanent roaming.
Speaker Paniz Parastar

Paniz Parastar is a PhD candidate in the Department of Informatics at the University of Oslo, Norway. She is a passionate networking researcher with expertise in network data analysis and optimization. During her PhD, she has been analyzing the real-world network to gain insights into the new use cases emerging in the 5G era. Currently, her focus is on connected cars, where she is investigating the requirements for deploying edge servers to enhance their performance.

Your Locations May Be Lies: Selective-PRS-Spoofing Attacks and Defence on 5G NR Positioning Systems

Kaixuan Gao, Huiqiang Wang and Hongwu Lv (Harbin Engineering University, China); Pengfei Gao (China Unicom Heilongjiang Branch, China)

5G positioning systems, as a solution for city-range integrated sensing and communication (ISAC), are rapidly becoming a reality. However, the positioning security aspects of such an ISAC system have been overlooked, posing threats to more than a billion 5G users. In this paper, we propose a new threat model for 5G positioning scenarios, the selective-PRS-spoofing attack (SPS), which disables the latest security enhancement reported in 3GPP R18. In our attack pattern, the attacker first cracks the broadcast information of a 5G network and then poisons specific resource elements of the channel, which can introduce substantial localization errors at victims or even completely control the positioning results. Worse, such attacks are transparent to both the UE side and the network side due to their stealth, and they easily bypass the current 3GPP defence mechanism. To solve this problem, we propose a DL-based defence method called the in-phase quadrature Network (IQ-Net), which utilizes the hardware features of base stations to perform identification at the physical level, thereby thwarting SPS attacks on 5G positioning systems. Extensive experiments demonstrate that our method achieves 98% defence accuracy and good robustness to noise.
Speaker Kaixuan Gao (Harbin Engineering University)

Kaixuan Gao received his B.E. degree in Computer Science and Technology in 2018. He is currently pursuing a PhD degree at Harbin Engineering University (HEU). His current research interests include high-precision localization, integrated sensing and communication (ISAC), AI, and future XG networks.

Session Chair

Kaushik Chowdhury
