A-4: Software Defined Networking and Virtualization

Conference: 8:30 AM — 10:00 AM PDT
Local: May 22 Wed, 11:30 AM — 1:00 PM EDT
Location: Regency A

YinYangRAN: Resource Multiplexing in GPU-Accelerated Virtualized RANs

Leonardo Lo Schiavo (Universidad Carlos III de Madrid & IMDEA Networks Institute, Spain); Jose A. Ayala-Romero (NEC Laboratories Europe GmbH, Germany); Andres Garcia-Saavedra (NEC Labs Europe, Germany); Marco Fiore (IMDEA Networks Institute, Spain); Xavier Costa-Perez (ICREA and i2cat & NEC Laboratories Europe, Spain)

RAN virtualization is revolutionizing the telco industry, enabling 5G Distributed Units to run on general-purpose platforms equipped with Hardware Accelerators (HAs). Recently, GPUs have been proposed as HAs, hinging on their unique capability to execute 5G PHY operations efficiently while also processing Machine Learning (ML) workloads. While this versatility makes GPUs attractive for cost-effective deployments, we experimentally demonstrate that multiplexing 5G and ML workloads in GPUs is in fact challenging, and that using conventional GPU-sharing methods can severely disrupt 5G operations. We then introduce YinYangRAN, an innovative O-RAN-compliant solution that supervises GPU-based HAs so as to ensure reliability in the 5G processing pipeline while maximizing the throughput of concurrent ML services. YinYangRAN makes GPU resource allocation decisions via a computationally efficient approximate dynamic programming technique, which is informed by a neural network trained on real-world measurements. Using workloads collected in real RANs, we demonstrate that YinYangRAN can achieve over 50% higher 5G processing reliability than conventional GPU-sharing methods with minimal impact on co-located ML workloads. To our knowledge, this is the first work identifying and addressing the complex problem of HA management in emerging GPU-accelerated vRANs, and it represents a promising step towards multiplexing PHY and ML workloads in mobile networks.
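
For illustration only, the sketch below shows a simplified one-step lookahead allocation loop in the spirit of the abstract's approach (it is not the authors' implementation): a stand-in predictor estimates 5G PHY reliability for a candidate GPU share, and the smallest share meeting a reliability target is chosen so the remaining capacity can serve ML workloads. The function names, the predictor, and the target value are all hypothetical.

```python
# Hypothetical sketch: choose a GPU share for the 5G PHY pipeline using a
# reliability predictor, leaving the rest of the GPU to co-located ML jobs.

def predicted_phy_reliability(phy_share: float, phy_load: float) -> float:
    """Stand-in for a learned model mapping (GPU share, PHY load) -> reliability."""
    return min(1.0, phy_share / max(phy_load, 1e-6))

def choose_gpu_split(phy_load: float, reliability_target: float = 0.999,
                     candidates=(0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9)) -> float:
    """Return the smallest PHY share that meets the reliability target."""
    for share in sorted(candidates):
        if predicted_phy_reliability(share, phy_load) >= reliability_target:
            return share
    return max(candidates)  # fall back to the largest share if none suffice

if __name__ == "__main__":
    for load in (0.2, 0.5, 0.8):
        s = choose_gpu_split(load)
        print(f"PHY load {load:.1f}: PHY share {s:.1f}, ML share {1 - s:.1f}")
```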

A Lightweight Path Validation Scheme in Software-Defined Networks

Bing Hu and Yuanguo Bi (Northeastern University, China); Kui Wu (University of Victoria, Canada); Rao Fu (Northeastern University & No Company, China); Zixuan Huang (Northeastern University, China)

Software-Defined Networks (SDN) imbue traditional networks with unmatched agility and programmability by segregating the control and data planes. However, this separation enables adversaries to tamper with data plane forwarding behaviours, thereby violating control plane policies and overall security guidelines. In response, we propose a Lightweight Path Validation Scheme (L-PVS) tailored for SDN. First, we put forth a streamlined packet-level path validation scheme that verifies the paths traversed by packets, alongside a theoretical analysis of this validation process. We then augment the scheme with network-flow-level path validation to improve validation efficiency. To alleviate the storage load on switches during flow path validation, we propose a storage optimization method that ties switch storage overhead to network flows rather than individual packets. Furthermore, we formulate a path partition scheme and introduce a Greedy-based KeySwitch Node Selection Algorithm (GKSS) to pinpoint the optimal switches for path partition, significantly reducing overall data plane storage usage. Lastly, we propose an anomalous switch identification method that utilizes temporary KeySwitch nodes when unexpected forwarding behaviours emerge. Evaluation results verify that L-PVS performs path validation with a reduced validation header size while minimizing the impact on processing delay and switch storage overhead.
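
As a purely illustrative aid (not GKSS as defined in the paper), the sketch below shows one way a greedy selection of KeySwitch nodes could partition a forwarding path so that validation state is kept only at selected nodes. The `segment_budget` parameter and node names are hypothetical assumptions.

```python
# Hypothetical greedy KeySwitch selection: place a KeySwitch whenever the
# current path segment reaches the storage budget, bounding per-node state.

def greedy_keyswitch_selection(path, segment_budget=3):
    """path: ordered list of switch IDs; returns KeySwitch nodes that
    partition the path into segments of at most `segment_budget` hops."""
    keyswitches = []
    hops_since_last = 0
    for switch in path[1:-1]:           # endpoints validate by default
        hops_since_last += 1
        if hops_since_last >= segment_budget:
            keyswitches.append(switch)  # close the current segment here
            hops_since_last = 0
    return keyswitches

if __name__ == "__main__":
    path = ["s1", "s2", "s3", "s4", "s5", "s6", "s7", "s8"]
    print(greedy_keyswitch_selection(path, segment_budget=3))  # ['s4', 's7']
```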

CloudPlanner: Minimizing Upgrade Risk of Virtual Network Devices for Large-Scale Cloud Networks

Xin He, Enhuan Dong and Jiahai Yang (Tsinghua University, China); Shize Zhang (Alibaba Group, China); Zhiliang Wang (Tsinghua University, China); Zejie Wang (Alibaba Group, China); Ye Yang (Alibaba Cloud & Zhejiang University, China); Jun Zhou, Xiaoqing Sun, Enge Song, Jianyuan Lu and Biao Lyu (Alibaba Group, China); Shunmin Zhu (Tsinghua University and Alibaba Group, China)

Cloud networks continuously upgrade softwarized virtual network devices (VNDs) to meet evolving tenant demands. However, such upgrades may result in unexpected failures. An intuitive way to prevent upgrade failures is to resolve all compatibility issues before deployment, but it is impractical for VND developers to replicate all deployed VND cases and test them against large volumes of replayed real traffic. As a result, the operations team accepts upgrade risk and tests upgrades by deploying them gradually. Although careful upgrade schedule planning is the most common method to minimize upgrade risk, to the best of our knowledge, no VND upgrade schedule planning scheme has been adequately studied for large-scale cloud networks. To fill this gap, we propose CloudPlanner, the first VND upgrade schedule planning scheme that aims to minimize VND upgrade risk for large-scale cloud networks. CloudPlanner prioritizes upgrading VNDs that are more likely to trigger failures, based on expert knowledge and the properties of VNDs that triggered failures in the past, and limits the number of tenants associated with simultaneously upgraded VNDs. We also propose a heuristic solver that quickly plans schedules in a greedy manner. Using real-world data from production environments, we demonstrate the benefits of CloudPlanner through extensive experiments.
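
The snippet below is a minimal sketch of the scheduling idea described above, under assumptions of my own (it is not CloudPlanner's solver): riskier VNDs are scheduled earlier, while each upgrade batch is capped by the number of tenants it can touch. The risk scores, tenant counts, and the `tenant_cap` parameter are hypothetical inputs.

```python
# Hypothetical greedy upgrade-schedule planner: risk-descending order with a
# per-batch cap on the number of affected tenants.

from dataclasses import dataclass

@dataclass
class VND:
    name: str
    risk: float   # e.g., derived from expert rules plus historical failures
    tenants: int  # tenants served by this device

def plan_batches(vnds, tenant_cap):
    """Return a list of batches; each batch's total tenant count <= tenant_cap."""
    batches, current, current_tenants = [], [], 0
    for vnd in sorted(vnds, key=lambda v: v.risk, reverse=True):
        if current and current_tenants + vnd.tenants > tenant_cap:
            batches.append(current)
            current, current_tenants = [], 0
        current.append(vnd)
        current_tenants += vnd.tenants
    if current:
        batches.append(current)
    return batches

if __name__ == "__main__":
    fleet = [VND("lb-1", 0.9, 40), VND("nat-2", 0.4, 70), VND("gw-3", 0.7, 30)]
    for i, batch in enumerate(plan_batches(fleet, tenant_cap=80)):
        print(f"batch {i}: {[v.name for v in batch]}")
```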

A Practical Near Optimal Deployment of Service Function Chains in Edge-to-Cloud Networks

Rasoul Behravesh (Fondazione Bruno Kessler, Italy); David Breitgand (IBM Research - Haifa, Israel); Dean H Lorenz (IBM Research - Haifa, Israel); Danny Raz (Technion - Israel Institute of Technology & Google, Israel)

Mobile edge computing opens a plethora of opportunities to develop new applications that offer a much better quality of experience to users. A fundamental problem that has been thoroughly studied in this context is the deployment of Service Function Chains (SFCs) onto a physical network spanning the edge-to-cloud continuum. This problem is known to be NP-hard; because of its practical importance, high-quality sub-optimal solutions are of great interest.

In this paper, we consider this well-known problem and propose a novel near-optimal heuristic that is extremely efficient and scalable. We evaluate our solution against state-of-the-art heuristics and the fractional optimum. In our large-scale evaluations, we use realistic topologies previously reported in the literature. We demonstrate that the execution time of our solution grows slowly with the number of Virtual Network Function (VNF) forwarding graph embedding requests, handling one million requests in slightly more than 30 seconds on an 80-node topology.
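
To make the problem setting concrete, here is a small, hypothetical greedy placement sketch (it is not the paper's heuristic): each VNF of a chain is placed on the lowest-latency node with enough spare CPU, falling back toward the cloud when edge nodes are full. Node names, capacities, and demands are invented for illustration.

```python
# Hypothetical greedy SFC placement over an edge-to-cloud node list.

def place_chain(chain_demands, nodes):
    """chain_demands: CPU demand per VNF, in chain order.
    nodes: list of [name, latency, free_cpu] entries, mutated in place.
    Returns a list of (vnf_index, node_name) pairs, or None if rejected."""
    placement = []
    for i, demand in enumerate(chain_demands):
        for node in sorted(nodes, key=lambda n: n[1]):  # prefer low-latency (edge) nodes
            if node[2] >= demand:                       # enough free CPU on this node
                node[2] -= demand
                placement.append((i, node[0]))
                break
        else:
            return None  # no node can host this VNF: reject the embedding request
    return placement

if __name__ == "__main__":
    nodes = [["edge-1", 2, 4], ["edge-2", 3, 2], ["cloud", 20, 100]]
    print(place_chain([2, 2, 4], nodes))
```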

Session Chair

Vaji Farhadi (Bucknell University, USA)


A-7: Consensus Protocols

Conference: 3:30 PM — 5:00 PM PDT
Local: May 22 Wed, 6:30 PM — 8:00 PM EDT
Location: Regency A

Tolerating Disasters with Hierarchical Consensus

Wassim Yahyaoui (University of Luxembourg & Interdisciplinary Centre for Security, Reliability and Trust (SnT), Luxembourg); Jeremie Decouchant (Delft University of Technology, The Netherlands); Marcus Völp (University of Luxembourg, Luxembourg); Joachim Bruneau-Queyreix (Bordeaux-INP, France)

Geo-replication provides disaster recovery after catastrophic accidental failures or attacks, such as fires, blackouts, or denial-of-service attacks on a data center or region. Naturally distributed data structures, such as blockchains, are immune to such disruptions when well designed, but they also benefit from leveraging locality. In this work, we consolidate the performance of geo-replicated consensus by leveraging novel insights about hierarchical consensus and a construction methodology that allows creating novel protocols from existing building blocks. In particular, we show that cluster confirmation, paired with subgroup rotation, allows protocols to safely operate through situations where all members of the global consensus group are Byzantine. We demonstrate our compositional construction by combining the recent HotStuff and Damysus protocols into a hierarchical geo-replicated blockchain with global durability guarantees. We present a compositionality proof and demonstrate the correctness of our protocol, including its ability to tolerate cluster crashes. Our protocol achieves 20% higher throughput than GeoBFT, the latest hierarchical Byzantine Fault-Tolerant (BFT) protocol.
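
The fragment below is a didactic abstraction of the cluster-confirmation idea only, under my own simplifying assumptions (it is not the HotStuff/Damysus-based protocol from the paper): a block first gathers a local BFT-style quorum inside its cluster, and is then considered globally confirmed once a majority of clusters has certified it. Quorum sizes and names are hypothetical.

```python
# Hypothetical two-level confirmation: local cluster quorums, then a global
# quorum over clusters.

def local_quorum(votes: int, cluster_size: int, faults: int) -> bool:
    """BFT-style local quorum: at least cluster_size - faults votes."""
    return votes >= cluster_size - faults

def globally_confirmed(cluster_certs: dict, num_clusters: int) -> bool:
    """Commit once a majority of clusters has locally certified the block."""
    certified = sum(1 for ok in cluster_certs.values() if ok)
    return certified > num_clusters // 2

if __name__ == "__main__":
    # Three clusters of 4 nodes, each tolerating 1 Byzantine node.
    certs = {c: local_quorum(v, cluster_size=4, faults=1)
             for c, v in {"eu": 3, "us": 4, "asia": 2}.items()}
    print(certs, "committed:", globally_confirmed(certs, num_clusters=3))
```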

Auncel: Fair Byzantine Consensus Protocol with High Performance

Chen Wuhui (Sun Yat-sen University, China); Yikai Feng (Sun Yat-sen University, China); Jianting Zhang (Purdue University, USA); Zhongteng Cai (Sun Yat-sen University, China); Hong-Ning Dai (Hong Kong Baptist University, Hong Kong); Zibin Zheng (School of Data and Computer Science, Sun Yat-sen University, China)

Since the advent of decentralized financial applications based on blockchains, new attacks that manipulate the order of transactions have emerged. To this end, order fairness protocols have been devised to prevent such order manipulations. However, existing order fairness protocols adopt time-consuming mechanisms that incur huge computation overheads and defer the finalization of transactions to subsequent rounds, eventually compromising system performance. In this work, we present Auncel, a novel consensus protocol that achieves both order fairness and high performance. Auncel leverages a weight-based strategy to order transactions, enabling all transactions in a block to be committed within one consensus round, without costly computation or further delays. Furthermore, Auncel achieves censorship resistance by integrating the consensus protocol with the fair ordering strategy, ensuring that all transactions can be ordered fairly. To reduce the overheads introduced by the fair ordering strategy, we also design optimization mechanisms, including dynamic transaction compression and an adjustable replica proposal strategy. We implement a prototype of Auncel based on HotStuff and conduct extensive experiments. Experimental results show that Auncel increases throughput by 6x and reduces confirmation latency by 3x compared with state-of-the-art order fairness protocols.
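
For intuition only, the sketch below shows one possible weight-based fair ordering step, not Auncel's actual rule: each replica reports its local arrival order, and transactions are ordered by their median reported position, limiting how much any single replica can bias the final order. The aggregation rule and names are assumptions.

```python
# Hypothetical weight-based fair ordering: order transactions by the median of
# their per-replica arrival positions.

from statistics import median

def fair_order(replica_orders):
    """replica_orders: list of per-replica transaction-ID lists (arrival order).
    Returns transaction IDs sorted by median position across replicas."""
    positions = {}
    for order in replica_orders:
        for pos, tx in enumerate(order):
            positions.setdefault(tx, []).append(pos)
    return sorted(positions, key=lambda tx: (median(positions[tx]), tx))

if __name__ == "__main__":
    reports = [["a", "b", "c"], ["b", "a", "c"], ["a", "c", "b"]]
    print(fair_order(reports))  # median positions: a -> 0, b -> 1, c -> 2
```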

CRACKLE: A Fast Sector-based BFT Consensus with Sublinear Communication Complexity

Hao Xu, Xiulong Liu, Chenyu Zhang, Wenbin Wang, Jianrong Wang and Keqiu Li (Tianjin University, China)

Blockchain systems widely employ Byzantine fault-tolerant (BFT) protocols to ensure consistency, and improving the throughput of BFT protocols is crucial for large-scale blockchain systems. Leading protocols face two problems: (i) a binary dilemma between the leader bottleneck of star-based linear communication and the compromised resilience of tree-based sublinear communication; and (ii) two- or three-round protocols restrict the number of phases per proposal, thereby limiting the scalability and parallelism of the pipeline. To overcome these problems, this paper proposes CRACKLE, the first sector-based pipelined BFT protocol with sublinear communication complexity, which improves consensus throughput while retaining the maximum resilience of (N-1)/3. We propose a sector-based communication mode that disseminates messages from the leader to a subset of replicas in each phase to accelerate consensus, and we split the traditional two-round protocol into 2κ phases to enlarge the basic pipeline scale. When implementing CRACKLE, we address two technical challenges: (i) ensuring QC certification across κ consecutive phases and (ii) achieving pipeline decoupling among shorter phases. We provide a comprehensive theoretical proof of the correctness of CRACKLE. Experimental results show that CRACKLE achieves up to 10.36x higher throughput than state-of-the-art BFT protocols such as Kauri and HotStuff.
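
The toy sketch below illustrates the sector-based dissemination pattern described in the abstract under my own assumptions (it is not CRACKLE itself): replicas are partitioned into κ sectors and the leader contacts one sector per phase over 2κ phases, so per-phase fan-out is roughly N/κ rather than N. The round-robin partitioning rule and the phase-to-sector mapping are hypothetical.

```python
# Hypothetical sector-based dissemination schedule for a 2*kappa-phase pipeline.

def make_sectors(replicas, kappa):
    """Round-robin partition of replicas into kappa sectors."""
    sectors = [[] for _ in range(kappa)]
    for i, replica in enumerate(replicas):
        sectors[i % kappa].append(replica)
    return sectors

def dissemination_schedule(replicas, kappa):
    """Map phase index -> sector of replicas the leader contacts in that phase."""
    sectors = make_sectors(replicas, kappa)
    return {phase: sectors[phase % kappa] for phase in range(2 * kappa)}

if __name__ == "__main__":
    schedule = dissemination_schedule([f"r{i}" for i in range(8)], kappa=2)
    for phase, sector in schedule.items():
        print(f"phase {phase}: {sector}")
```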

Expediting In-Network Federated Learning by Voting-Based Consensus Model Compression

Xiaoxin Su (Shenzhen University, China); Yipeng Zhou (Macquarie University, Australia); Laizhong Cui (Shenzhen University, China); Song Guo (The Hong Kong University of Science and Technology, Hong Kong)

Recently, federated learning (FL) has gained momentum because of its capability to preserve data privacy. To conduct model training with FL, multiple clients exchange model updates with a parameter server via the Internet. To accelerate communication, it has been explored to deploy a programmable switch (PS) in lieu of the parameter server to coordinate clients. The challenge in deploying the PS for FL lies in its scarce memory space, which prohibits running memory-consuming aggregation algorithms on the PS. To overcome this challenge, we propose the Federated Learning in-network Aggregation with Compression (FediAC) algorithm, which consists of two phases: client voting and model aggregation. In the former phase, clients report the indices of their significant model updates to the PS to estimate the globally significant model updates. In the latter phase, clients upload the globally significant model updates to the PS for aggregation. FediAC consumes much less memory space and communication traffic than existing works because the first phase guarantees consensus compression across clients, allowing the PS to easily align model update indices and swiftly complete aggregation in the second phase. Finally, we conduct extensive experiments on public datasets to demonstrate that FediAC remarkably surpasses state-of-the-art baselines in terms of model accuracy and communication traffic.
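
The following is a minimal sketch of the two-phase idea as described in the abstract, not the authors' code: clients vote for their locally significant update indices, the most-voted indices form the agreed global mask, and only values at those indices are uploaded and averaged. The top-k thresholds, array shapes, and function names are hypothetical.

```python
# Hypothetical two-phase voting-based consensus compression for FL aggregation.

import numpy as np

def client_vote(update, k):
    """Phase 1: each client proposes the indices of its k largest-magnitude updates."""
    return np.argsort(np.abs(update))[-k:]

def global_mask(votes, num_params, k):
    """Tally votes and keep the k most frequently proposed indices."""
    counts = np.zeros(num_params, dtype=int)
    for idx in votes:
        counts[idx] += 1
    return np.argsort(counts)[-k:]

def aggregate(updates, mask):
    """Phase 2: average only the values at the agreed indices."""
    return np.mean([u[mask] for u in updates], axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    updates = [rng.normal(size=100) for _ in range(5)]
    votes = [client_vote(u, k=10) for u in updates]
    mask = global_mask(votes, num_params=100, k=10)
    print("agreed indices:", np.sort(mask))
    print("sparse aggregate:", aggregate(updates, mask))
```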

Session Chair

Suman Banerjee (University of Wisconsin, USA)
