Session Opening

Opening, Awards, and Keynote

Conference
8:00 AM — 10:00 AM PDT
Local
May 21 Tue, 10:00 AM — 12:00 PM CDT
Location
Regency C/D

Networking Research in the Age of AI/ML: More Science, Less Hubris

Dr. Walter Willinger (Chief Scientist at NIKSUN, Inc.)

The raison d’être of science is to understand cause, not just effect. However, for many empirically observed Internet phenomena, establishing cause typically requires great diligence and significant effort: (i) conjecturing physical explanations of the observed phenomenon, (ii) empirically validating these explanations (typically with new data that did not figure in the original discovery of the phenomenon), and (iii) demonstrating their capability to withstand the scrutiny of domain experts. For each of the recent decades of Internet research, I will give a concrete example where this approach has either been successful or where challenges remain. These examples include the observed “self-similar” nature of measured Internet traffic (from the 1990s), the purported “scale-free” nature of measured Internet router topologies (from the 2000s), and the elusive nature of the Internet’s peering fabric or AS-level topology (from the 2010s). As for the current decade, which is experiencing a boom in the use of AI/ML for networking, I will discuss why this area calls for a drastic course correction that does away with the existing unfettered pursuit of black-box modeling and instead extols the virtues of new learning models that we can understand, explain, and ultimately trust.
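As background for the first example, the “self-similar” property of traffic is commonly stated via the variance of the aggregated process; the following is the standard second-order formulation from the teletraffic literature, not text from the talk:

\[
X^{(m)}(k) \;=\; \frac{1}{m}\sum_{i=(k-1)m+1}^{km} X(i), \qquad
\operatorname{Var}\!\bigl(X^{(m)}\bigr) \;\sim\; \sigma^2\, m^{2H-2} \quad (m \to \infty),
\]

where \(X(i)\) is the traffic volume in the \(i\)-th time slot and \(H \in (1/2, 1)\) is the Hurst parameter. Poisson-like models correspond to \(H = 1/2\), so aggregation smooths them out much faster than was observed in measured Ethernet traffic.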
Speaker Dr. Walter Willinger, Chief Scientist at NIKSUN, Inc.
Walter Willinger received the Dipl. Math. degree from the ETH Zurich and M.S. and Ph.D. degrees in Operations Research and Industrial Engineering from Cornell University. He worked at Bellcore Applied Research and AT&T Labs-Research and is currently Chief Scientist at NIKSUN, Inc., a Princeton-based company developing industry-leading real-time and forensics-based cybersecurity and network performance solutions. An internationally recognized expert in the field of Internet measurement, Dr. Willinger has made pioneering contributions to the understanding of the temporal nature of real-world Internet traffic and the spatial structure of the physical and logical topologies of today’s Internet. The resulting groundbreaking insights have informed generations of researchers and engineers to design their new protocols, applications, and systems around facets of Internet behavior that do not change amidst an otherwise highly dynamic and uncertain environment. His research garnered the 1995 IEEE Communications Society (ComSoc) W.R. Bennett Prize Paper Award, the 1996 IEEE W.R.G. Baker Prize Award, two ACM/SIGCOMM Test-of-Time Paper Awards (2005 & 2016), the 2023 Applied Networking Research Prize, and his paper “On the self-similar nature of Ethernet traffic” was featured in the 2007 IEEE ComSoc publication “The Best of the Best—Fifty Years of Communications and Networking Research”, as one of the most influential papers in communications and networking in the last half century. He is a Fellow of IEEE, ACM, AT&T, SIAM, and AAIA.

Session Break-1-1

Coffee Break

Conference
10:00 AM — 11:00 AM PDT
Local
May 21 Tue, 12:00 PM — 1:00 PM CDT
Location
Regency Foyer & Hallway

Session A-1

A-1: Network Privacy

Conference
11:00 AM — 12:30 PM PDT
Local
May 21 Tue, 1:00 PM — 2:30 PM CDT
Location
Regency A

X-Stream: A Flexible, Adaptive Video Transformer for Privacy-Preserving Video Stream Analytics

Dou Feng (Huazhong University of Science and Technology, China); Lin Wang (Paderborn University, Germany); Shutong Chen (Guangxi University, China); Lingching Tung and Fangming Liu (Huazhong University of Science and Technology, China)

Video stream analytics (VSA) systems fuel many exciting applications that facilitate people's lives, but they also raise critical concerns about exposing individuals' private information. To alleviate these concerns, various frameworks have been presented to enhance the privacy of VSA systems. Yet, existing solutions suffer from two limitations: (1) they are customized to specific scenarios, limiting their generality across diverse settings, and (2) they require complex, imperative programming and a tedious process, largely reducing usability. In this paper, we present X-Stream, a privacy-preserving video transformer that achieves flexibility and efficiency for a large variety of VSA tasks. X-Stream features three major novel designs: (1) a declarative query interface that provides a simple yet expressive way for users to describe both their privacy protection and content exposure requirements, (2) an adaptation mechanism that dynamically selects the most suitable privacy-preserving techniques and their parameters based on the current video context, and (3) an efficient execution engine that incorporates optimizations for multi-task deduplication and inter-frame inference. We implement X-Stream and evaluate it with representative VSA tasks and public video datasets. The results show that X-Stream achieves significantly improved privacy protection quality and performance over the state-of-the-art, while being simple to use.
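As a concrete, and entirely hypothetical, illustration of what a declarative privacy query could look like, the sketch below conveys the flavor of such an interface; the field names and structure are assumptions for exposition, not X-Stream's actual syntax:

# Hypothetical declarative query for a privacy-preserving VSA pipeline.
# All field names are illustrative assumptions, not X-Stream's actual API.
query = {
    "task": "pedestrian_counting",                 # analytics task to serve
    "expose": ["person.bounding_box"],             # content the task may see
    "protect": ["person.face", "license_plate"],   # content to obfuscate
    "min_task_accuracy": 0.9,                      # tolerated utility loss
}
# A runtime could then pick a protection technique (blurring, masking,
# pixelation, ...) and its parameters per video context to satisfy
# "protect" while keeping task accuracy above the stated floor.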
Speaker Shutong Chen (Guangxi University)

Shutong Chen is an Assistant Professor at Guangxi University in China. She received her Ph.D. degree from Huazhong University of Science and Technology. Her research interests include edge computing and green computing.


Privacy-Preserving Data Evaluation via Functional Encryption, Revisited

Xinyuan Qian and Hongwei Li (University of Electronic Science and Technology of China, China); Guowen Xu (City University of Hong Kong, China); Haoyong Wang (University of Electronic Science and Technology of China, China); Tianwei Zhang (Nanyang Technological University, Singapore); Xianhao Chen (University of Hong Kong, China); Yuguang Fang (City University of Hong Kong, Hong Kong)

In cloud-based data marketplaces, the cardinal objective is to facilitate interactions between data shoppers and sellers. This engagement allows shoppers to augment their internal datasets with external data, leading to significant enhancements in their machine learning models. Nonetheless, given the potential diversity of data values, it becomes critical for shoppers to assess the value of data before cementing any transactions. Recently, Song et al. introduced Primal (published at ACSAC), the pioneering cloud-assisted privacy-preserving data evaluation (PPDE) strategy. This strategy relies on variants of functional encryption (FE) as the underlying framework, conferring notable performance advantages over alternative cryptographic primitives such as secure multi-party computation and homomorphic encryption. However, in this paper, we show that Primal is susceptible to inadvertent misuse of FE and leaves considerable room for performance improvement. To combat this, we introduce a novel cryptographic primitive known as labeled function-hiding inner-product encryption. This new primitive serves as a remedy and forms the foundation for designing our concrete PPDE framework. Furthermore, experiments conducted on real datasets demonstrate that our framework reduces the overall computation cost of the current state-of-the-art secure PPDE scheme by roughly \(10\times\) and the communication cost for the data seller by about \(2\times\).
Speaker Xinyuan Qian (University of Electronic Science and Technology of China)

Xinyuan Qian is currently a Ph.D. student at the School of Computer Science and Engineering, University of Electronic Science and Technology of China, and a visiting researcher in Prof. Yuguang Fang's lab at City University of Hong Kong. His research interests include IBE, ABE, FE, applied cryptography, and privacy-preserving machine learning.


DPBalance: Efficient and Fair Privacy Budget Scheduling for Federated Learning as a Service

Yu Liu, Zibo Wang, Yifei Zhu and Chen Chen (Shanghai Jiao Tong University, China)

Federated learning (FL) has emerged as a prevalent distributed machine learning scheme that enables collaborative model training without aggregating raw data. Cloud service providers further embrace Federated Learning as a Service (FLaaS), allowing data analysts to execute their FL training pipelines over differentially-protected data. Due to the intrinsic properties of differential privacy, the enforced privacy level on data blocks can be viewed as a privacy budget that requires careful scheduling to cater to diverse training pipelines. Existing privacy budget scheduling studies prioritize either efficiency or fairness, but not both. In this paper, we propose DPBalance, a novel privacy budget scheduling mechanism that jointly optimizes efficiency and fairness. We first develop a comprehensive utility function incorporating data analyst-level dominant shares and FL-specific performance metrics. A sequential allocation mechanism is then designed using the Lagrange multiplier method and effective greedy heuristics. We theoretically prove that DPBalance satisfies Pareto Efficiency, Sharing Incentive, Envy-Freeness, and Weak Strategy Proofness. We also theoretically prove the existence of a fairness-efficiency tradeoff in privacy budgeting. Extensive experiments demonstrate that DPBalance outperforms state-of-the-art solutions, achieving an average efficiency improvement of \(1.44 \times \sim 3.49\times \), and an average fairness improvement of \(1.37 \times \sim 24.32 \times \).
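To make the scheduling problem concrete, here is a toy Python sketch of fairness-aware privacy-budget allocation in the spirit of dominant shares; it is a simplified stand-in under assumed inputs, not DPBalance's actual mechanism (which optimizes a paper-specific utility via the Lagrange multiplier method and greedy heuristics):

def schedule(block_budgets, requests):
    """Toy scheduler: repeatedly serve the analyst with the smallest share
    of granted budget so far, if the data block's remaining privacy budget
    (epsilon) can cover the request; otherwise the request is dropped.
    block_budgets: dict block_id -> remaining epsilon
    requests: list of (analyst_id, block_id, eps_needed)"""
    share, granted = {}, []
    pending = list(requests)
    while pending:
        # Fairness: serve the analyst with the lowest granted share first.
        pending.sort(key=lambda r: share.get(r[0], 0.0))
        analyst, block, eps = pending.pop(0)
        # Efficiency: grant only if the block's budget is not exhausted.
        if block_budgets.get(block, 0.0) >= eps:
            block_budgets[block] -= eps
            share[analyst] = share.get(analyst, 0.0) + eps
            granted.append((analyst, block, eps))
    return granted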
Speaker Yu Liu (Shanghai Jiao Tong Univ.)



Optimal Locally Private Data Stream Analytics

Shaowei Wang, Yun Peng and Kongyang Chen (Guangzhou University, China); Wei Yang (University of Science and Technology of China, China)

Online data analytics with local privacy protection is widely adopted in real-world applications. Despite numerous endeavors in this field, significant gaps in utility and functionality remain when compared to its offline counterpart. This work demonstrates that private data analytics can be conducted online without excess utility loss, even up to constant factors.

We present an optimal, streamable mechanism for locally differentially private sparse vector estimation. The mechanism enables a range of online analytics on streaming binary vectors, including multi-dimensional binary, categorical, or set-valued data. By leveraging the negative correlation of occurrence events in the sparse vector, we attain an optimal error rate under local privacy constraints while requiring only streamable computations during the input's data-dependent phase. Through experiments with both synthetic and real-world datasets, our proposals have been shown to reduce error rates by 40% to 60% compared to SOTA approaches.
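For readers new to local differential privacy, the classic randomized-response mechanism below is the simplest baseline for one binary coordinate; the paper's streamable sparse-vector mechanism achieves a strictly better error rate by exploiting negative correlations across coordinates, which this sketch does not capture:

import math, random

def randomized_response(bit, eps):
    """epsilon-LDP perturbation of one bit: answer truthfully with
    probability e^eps / (e^eps + 1), otherwise flip."""
    p = math.exp(eps) / (math.exp(eps) + 1.0)
    return bit if random.random() < p else 1 - bit

def estimate_mean(reports, eps):
    """Debias the perturbed reports: E[report] = b*(2p-1) + (1-p)."""
    p = math.exp(eps) / (math.exp(eps) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)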
Speaker Shaowei Wang (Guangzhou University)



Session Chair

Batyr Charyyev (University of Nevada Reno, USA)

Session B-1

B-1: Radio Access Networks

Conference
11:00 AM — 12:30 PM PDT
Local
May 21 Tue, 1:00 PM — 2:30 PM CDT
Location
Regency B

Det-RAN: Data-Driven Cross-Layer Real-Time Attack Detection in 5G Open RANs

Alessio Scalingi (IMDEA Networks, Spain); Salvatore D'Oro, Francesco Restuccia and Tommaso Melodia (Northeastern University, USA); Domenico Giustiniano (IMDEA Networks Institute, Spain)

Fifth generation (5G) and beyond cellular networks are vulnerable to security threats, mainly due to the lack of integrity protection in the Radio Resource Control (RRC) layer. To address this problem, we propose a real-time anomaly detection framework that builds on the concept of distributed applications in 5G Open RAN networks. Specifically, we analyze spectrum-level characteristics and infer, in a novel way, the time of arrival of uplink packets lacking integrity protection. We identify legitimate message sources and detect suspicious activities through an Artificial Intelligence (AI) design at the edge that handles cross-layer data, demonstrating that Open RAN-based applications can be designed to provide additional security to the network. Our solution is first validated in extensive emulation environments, achieving over 85% accuracy in predicting potential attacks on unseen test scenarios. We then integrate our approach into a real-world prototype with a large channel emulator to assess its real-time performance and costs, meeting a low-latency real-time constraint of 2 ms. This makes our solution suitable for real-world deployments.
Speaker
Speaker biography is not available.

Providing UE-level QoS Support by Joint Scheduling and Orchestration for 5G vRAN

Jiamei Lv, Yi Gao, Zhi Ding, Yuxiang Lin and Xinyun You (Zhejiang University, China); Guang Yang (Alibaba Group, China); Wei Dong (Zhejiang University, China)

Virtualized radio access networks (vRAN) enable network operators to run RAN functions on commodity servers instead of proprietary hardware. vRAN has garnered significant interest due to its ability to reduce costs, provide deployment flexibility, and offer other benefits, particularly for operators of 5G private networks. However, non-deterministic computing platforms pose difficulties for effective quality of service (QoS) provision, especially when time-critical and throughput-demanding applications are deployed together. Existing approaches, including network slicing and other resource management schemes, fail to provide fine-grained and effective QoS support at the User Equipment (UE) level. In this paper, we propose RT-vRAN, a UE-level QoS provision framework. RT-vRAN presents the first comprehensive analysis of the complicated interplay among key network parameters, e.g., network function splitting, resource block allocation, and modulation/coding scheme selection, and builds an accurate and comprehensive network model. RT-vRAN also provides a fast network configurator that produces feasible configurations in seconds, making it practical for actual 5G vRANs. We implement RT-vRAN on OpenAirInterface and evaluate it with simulation and testbed-based experiments. Results show that, compared with existing works, RT-vRAN reduces the delay violation rate by 12%--41% under various network settings, while minimizing the total energy consumption.
Speaker Jiamei Lv (Zhejiang University)

Jiamei Lv is currently a Researcher at the School of Software Technology, Zhejiang University. She received her Ph.D. degree from the College of Computer Science, Zhejiang University in 2023. Her research interests include the Internet of Things, edge computing, and 5G.


ORANUS: Latency-tailored Orchestration via Stochastic Network Calculus in 6G O-RAN

Oscar Adamuz-Hinojosa (University of Granada, Spain); Lanfranco Zanzi (NEC Laboratories Europe, Germany); Vincenzo Sciancalepore (NEC Laboratories Europe GmbH, Germany); Andres Garcia-Saavedra (NEC Labs Europe, Germany); Xavier Costa-Perez (ICREA and i2cat & NEC Laboratories Europe, Spain)

The Open-Radio Access Network (O-RAN) Alliance has introduced a new architecture to enhance the 6th generation (6G) RAN. However, existing O-RAN-compliant solutions lack crucial details needed to perform effective control loops at multiple time scales. In this vein, we propose ORANUS, an O-RAN-compliant mathematical framework to allocate radio resources to multiple ultra-Reliable Low Latency Communication (uRLLC) services at different time scales. In the near-RT control loop, ORANUS relies on a novel Stochastic Network Calculus (SNC)-based model to compute the amount of guaranteed radio resources for each uRLLC service. Unlike traditional approaches such as queueing theory, the SNC-based model allows ORANUS to ensure that the probability that the packet transmission delay exceeds a budget, i.e., the violation probability, stays below a target tolerance. ORANUS also utilizes an RT control loop to monitor service transmission queues, dynamically adjusting the guaranteed radio resources based on detected traffic anomalies. To the best of our knowledge, ORANUS is the first O-RAN-compliant solution that benefits from SNC to carry out near-RT and RT control loops. Simulation results show that ORANUS significantly improves over reference solutions, with an average violation probability 10x lower.
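The guarantee described above takes the form of a delay-violation constraint. In generic SNC notation (standard in the literature, not ORANUS's exact model), the near-RT loop sizes the guaranteed resources so that

\[
\Pr\{\,D > d_{\max}\,\} \;\le\; \varepsilon,
\]

where \(D\) is the packet transmission delay, \(d_{\max}\) the delay budget, and \(\varepsilon\) the target tolerance. SNC yields computable upper bounds on the left-hand side through Chernoff-type arguments, e.g. \(\Pr\{D > d\} \le e^{-\theta d}\,\mathbb{E}[e^{\theta D}]\) for any \(\theta > 0\), which averages from classical queueing analysis alone do not provide.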
Speaker
Speaker biography is not available.

OREO: O-RAN intElligence Orchestration of xApp-based network services

Federico Mungari and Corrado Puligheddu (Politecnico di Torino, Italy); Andres Garcia-Saavedra (NEC Labs Europe, Germany); Carla Fabiana Chiasserini (Politecnico di Torino & CNIT, IEIIT-CNR, Italy)

The Open Radio Access Network (O-RAN) architecture aims to support a plethora of network services, such as beam management and network slicing, through the use of third-party applications called xApps. To efficiently provide network services at the radio interface, it is thus essential that the deployment of xApps is carefully orchestrated. In this paper, we introduce OREO, an O-RAN xApp orchestrator designed to maximize the number of offered services. OREO's key idea is that services can share xApps whenever they correspond to semantically equivalent functions and the xApp output is of sufficient quality to fulfill the service requirements. By leveraging a multi-layer graph model that captures all the system components, from services to xApps, OREO implements an algorithmic solution that selects the best service configuration, maximizes the number of shared xApps, and efficiently and dynamically allocates resources to them. Numerical results, as well as experimental tests performed with our proof-of-concept implementation, demonstrate that OREO closely matches the optimum obtained by solving an NP-hard problem. Further, it outperforms the state of the art, deploying up to 35% more services with an average of 28% fewer xApps and a comparable reduction in resource consumption.
Speaker
Speaker biography is not available.

Session Chair

Ning Lu (Queen's University, Canada)

Session C-1

C-1: UAV networking

Conference
11:00 AM — 12:30 PM PDT
Local
May 21 Tue, 1:00 PM — 2:30 PM CDT
Location
Regency C/D

Online Radio Environment Map Creation via UAV Vision for Aerial Networks

Neil C Matson (Georgia Institute of Technology, USA); Karthikeyan Sundaresan (Georgia Tech, USA)

Radio environment maps (REMs) provide a comprehensive spatial view of the wireless channel and are especially useful in on-demand UAV wireless networks, where operators are not afforded the time typically spent planning base station deployments. Equipped with an accurate radio environment map, a mobile UAV can quickly move to an optimal location to serve UEs on the ground. Machine learning has recently been proposed as a tool to create radio environment maps from satellite images of the target environment. However, the highly dynamic nature that precipitates most ad-hoc aerial network deployments likely means that whatever satellite image data is available for the environment is inaccurate. In this paper we present a hybrid offline/online system for radio environment map creation that leverages a sensing modality present on most UAVs: visual cameras. Our system combines a suite of offline-trained neural network models with an adaptive trajectory planning algorithm to iteratively predict the REM and estimate the most valuable trajectory locations. By using UAV vision, the system arrives at a more accurate map more quickly and with fewer measurements than other approaches, and is effective even in scenarios where no prior environmental knowledge is available.
Speaker
Speaker biography is not available.

A Two Time-Scale Joint Optimization Approach for UAV-assisted MEC

Zemin Sun, Geng Sun, Long He and Fang Mei (Jilin University, China); Shuang Liang (Northeast Normal University, China); Yanheng Liu (Jilin University, China)

Unmanned aerial vehicle (UAV)-assisted mobile edge computing (MEC) is emerging as a promising paradigm to provide aerial-terrestrial computing services in close proximity to mobile devices (MDs). However, meeting the demands of computation-intensive and delay-sensitive tasks for MDs poses several challenges, including the demand-supply contradiction and heterogeneity between MDs and MEC servers, the trajectory control requirements of energy efficiency and timeliness, and the different time-scale dynamics of the network. To address these issues, we first present a hierarchical architecture that incorporates terrestrial-aerial computing capabilities and leverages UAV flexibility. Furthermore, we formulate a joint computing resource allocation, computation offloading, and trajectory control problem (JCCTP) to maximize the system utility. Since the problem is a non-convex mixed integer nonlinear program (MINLP), we propose TJCCT, a two-time-scale joint optimization approach. In the short time scale, we propose a price-incentive method for on-demand computing resource allocation and a matching-based method for computation offloading. In the long time scale, we propose a convex optimization-based method for UAV trajectory control. Moreover, we prove the stability, optimality, and polynomial complexity of TJCCT. Simulation results demonstrate that TJCCT outperforms the comparative algorithms in terms of the total utility of the system, the aggregate QoE of MDs, and the total revenue of MEC servers.
Speaker
Speaker biography is not available.

An Online Joint Optimization Approach for QoE Maximization in UAV-Enabled Mobile Edge Computing

Long He, Geng Sun and Zemin Sun (Jilin University, China); Pengfei Wang (Dalian University of Technology, China); Jiahui Li (Jilin University, China); Shuang Liang (Northeast Normal University, China); Dusit Niyato (Nanyang Technological University, Singapore)

Given their flexible mobility, rapid deployment, and low cost, unmanned aerial vehicle (UAV)-enabled mobile edge computing (MEC) shows great potential to compensate for the lack of terrestrial edge computing coverage. However, limited battery capacity and limited computing and spectrum resources pose serious challenges for UAV-enabled MEC: without an effective control approach, they shorten the service time of UAVs and degrade the quality of experience (QoE) of user devices (UDs). In this work, we consider a UAV-enabled MEC scenario where a UAV serves as an aerial edge server providing computing services for multiple ground UDs. We formulate a joint task offloading, resource allocation, and UAV trajectory planning optimization problem (JTRTOP) to maximize the QoE of UDs under a UAV energy consumption constraint. To solve the JTRTOP, which is proven to be future-dependent and NP-hard, an online joint optimization approach (OJOA) is proposed. Specifically, the JTRTOP is first transformed into a per-slot real-time optimization problem (PROP) using the Lyapunov optimization framework. Then, a two-stage optimization method based on game theory and convex optimization is proposed to solve the PROP. Simulation results validate that the proposed approach achieves superior system performance compared to benchmark schemes.
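The transformation into a per-slot problem typically follows the standard Lyapunov drift-plus-penalty recipe (stated generically here in my own notation, not the paper's exact derivation): introduce a virtual queue \(Q(t)\) tracking cumulative energy over-consumption,

\[
Q(t+1) \;=\; \max\{\,Q(t) + e(t) - \bar{e},\; 0\,\},
\]

and in each slot choose offloading, resource, and trajectory actions to solve

\[
\min \;\; -\,V\cdot \mathrm{QoE}(t) \;+\; Q(t)\,e(t),
\]

where \(e(t)\) is the slot's UAV energy use, \(\bar{e}\) the long-term per-slot energy budget, and \(V > 0\) trades constraint satisfaction (queue stability) against QoE.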
Speaker
Speaker biography is not available.

Near-Optimal UAV Deployment for Delay-Bounded Data Collection in IoT Networks

Shu-Wei Chang (National Yang Ming Chiao Tung University, Taiwan); Jian-Jhih Kuo (National Chung Cheng University, Taiwan); Mong-Jen Kao (National Yang-Ming Chiao-Tung University, Taiwan); Bo-Zhong Chen and Qian-Jing Wang (National Chung Cheng University, Taiwan)

The rapid growth of Internet of Things (IoT) applications has spurred the need for efficient data collection mechanisms. Traditional approaches relying on fixed infrastructure have limitations in coverage, scalability, and deployment costs. Unmanned Aerial Vehicles (UAVs) have emerged as a promising alternative due to their mobility and flexibility. In this paper, we aim to minimize the number of UAVs deployed to collect data in IoT networks, subject to a delay budget reflecting energy limitations and data freshness. To this end, we propose GPUDA, a novel 3-approximation dynamic-programming-based algorithm for efficient UAV data collection in real-world scenarios, where the number of UAVs owned by an individual or organization is unlikely to be large; GPUDA improves on the best-known approximation ratio of 4. GPUDA is a geometric partition-based method that incorporates data rounding techniques. Experimental results demonstrate that the proposed algorithm requires 35.01% to 58.55% fewer deployed UAVs on average compared to existing algorithms.
Speaker
Speaker biography is not available.

Session Chair

Enrico Natalizio (University of Lorraine/Loria, France)

Session D-1

D-1: Federated Learning 1

Conference
11:00 AM — 12:30 PM PDT
Local
May 21 Tue, 1:00 PM — 2:30 PM CDT
Location
Regency E

AeroRec: An Efficient On-Device Recommendation Framework using Federated Self-Supervised Knowledge Distillation

Tengxi Xia and Ju Ren (Tsinghua University, China); Rao Wei, Zu Qin, Wang Wenjie and Chen Shuai (Meituan, China); Yaoxue Zhang (Tsinghua University, China)

Modern recommendation systems, relying on centralized servers, require users to upload their behavior data, raising significant privacy concerns. Federated learning (FL), designed to uphold user privacy, is emerging as an optimal solution. By fusing FL with recommendation systems, Federated Recommendation Systems (FRS) enable collaborative model training without exposing individual user data. Yet, current federated frameworks often overlook the constraints of mobile devices, such as limited storage and computational capabilities. While lightweight models can be apt for these devices, they struggle to achieve the desired accuracy, especially given the communication overheads and sparsity of recommendation data. We propose AeroRec, an efficient on-device federated recommendation framework. It employs federated self-supervised distillation to enhance the global model, promoting rapid convergence and surpassing typical lightweight model constraints. Through extensive experiments on three real-world datasets, we demonstrate that AeroRec outperforms several state-of-the-art FRS frameworks in recommendation accuracy and convergence speed.
Speaker Tengxi Xia (Tsinghua University)

Hello everyone, my name is Xia Tengxi. I completed my undergraduate degree in Software Engineering at Harbin University of Science and Technology. I am currently pursuing a doctoral degree in the Computer Science Department at Tsinghua University.


Agglomerative Federated Learning: Empowering Larger Model Training via End-Edge-Cloud Collaboration

Zhi Yuan Wu and Sheng Sun (Institute of Computing Technology, Chinese Academy of Sciences, China); Yuwei Wang (Institute of Computing Technology Chinese Academy of Sciences, China); Min Liu (Institute of Computing Technology, Chinese Academy of Sciences, China); Bo Gao (Beijing Jiaotong University, China); Quyang Pan, Tianliu He and Xuefeng Jiang (Institute of Computing Technology, China)

Federated Learning (FL) enables training Artificial Intelligence (AI) models over end devices without compromising their privacy. As computing tasks are increasingly performed by a combination of cloud, edge, and end devices, FL can benefit from this End-Edge-Cloud Collaboration (EECC) paradigm to achieve collaborative device-scale expansion with real-time access. Although Hierarchical Federated Learning (HFL) supports multi-tier model aggregation suitable for EECC, prior works assume the same model structure on all computing nodes, constraining the model scale by the weakest end devices. To address this issue, we propose Agglomerative Federated Learning (FedAgg), a novel EECC-empowered FL framework that allows the trained models from end, edge, to cloud to grow larger in size and stronger in generalization ability. FedAgg recursively organizes computing nodes among all tiers based on a Bridge Sample Based Online Distillation Protocol (BSBODP), which enables every pair of parent-child computing nodes to mutually transfer and distill knowledge extracted from generated bridge samples. This design enhances performance by exploiting the potential of larger models, while satisfying both the privacy constraints of FL and the flexibility requirements of EECC. Experiments under various settings demonstrate that FedAgg outperforms state-of-the-art methods with an average accuracy gain of 4.53% and remarkable improvements in convergence rate.
Speaker Zhiyuan Wu (Institute of Computing Technology, Chinese Academy of Sciences)

Zhiyuan Wu currently is a research assistant with the Institute of Computing Technology, Chinese Academy of Sciences (ICT, CAS). He has contributed several technical papers to top-tier conferences and journals as the first author in the fields of computer architecture, computer networks, and intelligent systems, including IEEE Transactions on Parallel and Distributed Systems (TPDS), IEEE Transactions on Mobile Computing (TMC), IEEE International Conference on Computer Communications (INFOCOM), and ACM Transactions on Intelligent Systems and Technology (TIST). He has served as a technical program committee member or a reviewer for over 10 conferences and journals, and was invited to serve as a session chair for the International Conference on Computer Technology and Information Science (CTIS). He is a member of IEEE, ACM, the China Computer Federation (CCF), and is granted the President Special Prize of ICT, CAS. His research interests include federated learning, mobile edge computing, and distributed systems.


BR-DeFedRL: Byzantine-Robust Decentralized Federated Reinforcement Learning with Fast Convergence and Communication Efficiency

Jing Qiao (Shandong University, China); Zuyuan Zhang (George Washington University, USA); Sheng Yue (Tsinghua University, China); Yuan Yuan (Shandong University, China); Zhipeng Cai (Georgia State University, USA); Xiao Zhang (Shandong University, China); Ju Ren (Tsinghua University, China); Dongxiao Yu (Shandong University, China)

In this paper, we propose Byzantine-Robust Decentralized Federated Reinforcement Learning (BR-DeFedRL), an innovative framework that effectively combats the harmful influence of Byzantine agents by adaptively adjusting communication weights, thereby significantly enhancing the robustness of the learning system. By leveraging decentralized learning, our approach eliminates the dependence on a central server. Striking a harmonious balance between communication round count and sample complexity, BR-DeFedRL achieves efficient convergence with a rate of O(1/(TN)), where T denotes the communication rounds and N represents the local steps related to variance reduction. Notably, each agent attains an ε-approximation with a state-of-the-art sample complexity of O(1/(ε N)+1/ε). Extensive experimental validations further affirm the efficacy of BR-DeFedRL, making it a promising and practical solution for Byzantine-robust decentralized federated reinforcement learning.
Speaker
Speaker biography is not available.

Breaking Secure Aggregation: Label Leakage from Aggregated Gradients in Federated Learning

Zhibo Wang, Zhiwei Chang and Jiahui Hu (Zhejiang University, China); Xiaoyi Pang (Wuhan University, China); Jiacheng Du (Zhejiang University, China); Yongle Chen (Taiyuan University of Technology, China); Kui Ren (Zhejiang University, China)

Federated Learning (FL) exhibits privacy vulnerabilities under gradient inversion attacks (GIAs), which can extract private information from individual gradients. To enhance privacy, FL incorporates Secure Aggregation (SA) to prevent the server from obtaining individual gradients, thus effectively resisting GIAs. In this paper, we propose a stealthy label inference attack that bypasses SA and recovers individual clients' private labels. Specifically, we conduct a theoretical analysis of label inference from the aggregated gradients that are exclusively obtained after implementing SA. The analysis reveals that the inputs (embeddings) and outputs (logits) of the final fully connected layer (FCL) contribute to gradient disaggregation and label restoration. To preset the embeddings and logits of the FCL, we craft a fishing model by solely modifying the parameters of a single batch normalization (BN) layer in the original model. By distributing client-specific fishing models, the server can derive the individual gradients with respect to the bias of the FCL by resolving a linear system with the expected embeddings and the aggregated gradients as coefficients. The labels of each client can then be precisely computed from the preset logits and the gradients of the FCL's bias. Extensive experiments show that our attack achieves large-scale label recovery with 100% accuracy on various datasets and model architectures.
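The signal such attacks build on is classical: for softmax with cross-entropy on a single sample, the gradient with respect to the final-layer bias equals \(p - y\), so its unique negative entry marks the true label. Below is a minimal numpy illustration of this single-sample fact; the paper's actual contribution, recovering per-client gradients from SA-aggregated gradients via fishing models, is not reproduced here:

import numpy as np

def label_from_bias_grad(bias_grad):
    """With softmax + cross-entropy, d(loss)/d(bias) = p - y for one
    sample, so the (only) negative entry reveals the true label."""
    return int(np.argmin(bias_grad))

# Toy check with 4 classes and true label 2.
logits = np.array([1.0, 0.5, 2.0, -1.0])
p = np.exp(logits) / np.exp(logits).sum()   # softmax probabilities
y = np.eye(4)[2]                            # one-hot true label
assert label_from_bias_grad(p - y) == 2     # p - y is the bias gradient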
Speaker Zhiwei Chang (Zhejiang University)

Hi, I am Zhiwei Chang, a graduate student at the School of Computer Science, Zhejiang University and my research focuses on security and privacy issues in federated learning.


Session Chair

Qin Hu (IUPUI, USA)

Session E-1

E-1: Network Measurement

Conference
11:00 AM — 12:30 PM PDT
Local
May 21 Tue, 1:00 PM — 2:30 PM CDT
Location
Regency F

Robust or Risky: Measurement and Analysis of Domain Resolution Dependency

Shuhan Zhang (Tsinghua University, China); Shuai Wang (Zhongguancun Laboratory, China); Dan Li (Tsinghua University, China)

DNS relies on domain delegation for good scalability, whereby domains delegate their resolution service to authoritative nameservers. However, such delegations can lead to complex inter-dependencies between DNS zones. While complex dependencies might improve the robustness of domain resolution, they can also introduce unexpected security risks. In this work, we perform a large-scale measurement over nearly 217M domains to analyze their resolution dependencies at both the zone level and the infrastructure level. According to our analysis, domains under country-code TLDs and new generic TLDs present more complex dependency relationships. For robustness, popular domains prefer to configure more complex dependencies. However, centralized hosting of nameservers and the silent outsourcing of DNS providers can lead to false redundancy at the infrastructure level. Worse, a considerable number of domain configurations in the wild are "not robust but risky": a complex dependency is also likely to introduce vulnerabilities, e.g., domains with twice the dependency complexity have a 2.87 times larger proportion exposed to hijacking risk via lame delegation.
Speaker Shuhan Zhang (Tsinghua University)



Accelerating Sketch-based End-Host Traffic Measurement with Automatic DPU Offloading

Xiang Chen, Xi Sun, Wenbin Zhang, Zizheng Wang, Xin Yao, Hongyan Liu and Gaoning Pan (Zhejiang University, China); Qun Huang (Peking University, China); Xuan Liu (Yangzhou University & Southeast University, China); Haifeng Zhou and Chunming Wu (Zhejiang University, China)

In modern networks, sketch-based traffic measurement offers a promising building block for monitoring real-time traffic statistics and detecting network events to guarantee quality of service for tenant applications. However, existing approaches to building sketches in end-hosts either suffer from poor packet processing performance or incur non-trivial CPU consumption. In this paper, we propose MPU, a framework that automatically offloads sketch-based measurement from end-host CPU cores to an emerging type of hardware, the data processing unit (DPU). To achieve this goal, MPU offers two components. First, its sketch analyzer profiles the DPU resource consumption of heterogeneous sketches, such as the minimum number of DPU cores required to achieve maximum performance. Second, its optimization framework encodes DPU resource capacity and the analysis results as constraints while formulating the problem of offloading sketches onto the DPU, which it solves optimally to maximize sketch performance. We have implemented MPU on the NVIDIA BlueField DPU. Our testbed results indicate that MPU outperforms existing approaches with 85% lower per-packet processing latency while achieving 47% higher traffic measurement accuracy.
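As background on the sketches being offloaded, below is a textbook count-min sketch, one of the classic structures a framework like MPU would profile for DPU-core requirements; this is a generic implementation, not MPU's code:

import hashlib

class CountMinSketch:
    """Textbook count-min sketch: d rows of w counters. A flow's estimate
    is the minimum over its d hashed counters, so it only overestimates."""
    def __init__(self, width=1024, depth=4):
        self.w, self.d = width, depth
        self.rows = [[0] * width for _ in range(depth)]

    def _index(self, key, row):
        # Per-row salt gives d independent hash functions over the flow key.
        digest = hashlib.blake2b(key, digest_size=8, salt=bytes([row])).digest()
        return int.from_bytes(digest, "little") % self.w

    def update(self, key, count=1):
        for r in range(self.d):
            self.rows[r][self._index(key, r)] += count

    def query(self, key):
        return min(self.rows[r][self._index(key, r)] for r in range(self.d))

cms = CountMinSketch()
cms.update(b"10.0.0.1->10.0.0.2:443")   # e.g., count packets per flow key
assert cms.query(b"10.0.0.1->10.0.0.2:443") >= 1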
Speaker
Speaker biography is not available.

Effective Network-Wide Traffic Measurement: A Lightweight Distributed Sketch Deployment

Fuliang Li and Kejun Guo (Northeastern University, China); Jiaxing Shen (Lingnan University, Hong Kong); Xingwei Wang (Northeastern University, China)

Network measurement is critical for various network applications, but scaling measurement techniques to the network-wide level is challenging for existing sketch-based solutions. Centralized sketch deployment provides low resource usage but suffers from poor load balancing. In contrast, collaborative measurement achieves load balancing through traffic distribution across switches but requires high resource usage. This paper presents a novel lightweight distributed deployment framework that overcomes both limitations. First, our framework is lightweight: it splits sketches into segments and allocates them across forwarding paths to minimize resource usage and achieve load balance. This also enables per-packet load balancing by distributing computations across switches. Second, our framework is optimized for load balancing by coordinating between flows and enabling finer-grained traffic division. We evaluate the proposed framework on various network topologies and different sketch deployments. Results indicate our solution matches the load balancing of collaborative measurement while approaching the low resource usage of centralized deployment. Moreover, it achieves superior performance in per-packet load balancing, which previous deployment policies do not consider. Our work thus provides efficient distributed sketch deployment that strikes a balance between load balancing and resource usage, enabling effective network-wide measurement.
Speaker Kejun Guo (Northeastern University, China)



QM-RGNN: An Efficient Online QoS Measurement Framework with Sparse Matrix Imputation for Distributed Edge Clouds

Heng Zhang, Zixuan Cui, Shaoyuan Huang, Deke Guo and Xiaofei Wang (Tianjin University, China); Wenyu Wang (Shanghai Zhuichu Networking Technologies Co., Ltd., China)

Measurements of the quality of end-to-end network services (QoS) are crucial to ensure stability, reliability, and user experience for distributed edge clouds. Existing QoS measurement uses sparse measured QoS data to estimate unmeasured QoS data, but suffers from low estimation accuracy when facing QoS data with high sparsity or significant volatility. Moreover, it incurs high computational costs from continuous measurements and lacks adaptivity. Our preliminary analysis reveals that end-to-end QoS is strongly temporal-spatially correlated. This inspires us to leverage partially measured QoS data to impute temporal-spatially related unmeasured QoS data, reducing measurement costs. We propose a novel QoS measurement framework based on Residual Graph Neural Networks (QM-RGNN). It consists of three core components: 1) a learnable, dynamically adaptive sampling ratio that reduces sampling costs; 2) a residual module introduced into the encoder-decoder model to tackle highly sparse and volatile network QoS data; and 3) an online learning pattern designed to reduce continuous training costs. Experiments on two real-world edge cloud datasets demonstrate the superiority of QM-RGNN in QoS measurement. It obtains at least a 37.5% reduction in relative RMSE between ground-truth and predicted QoS data, with up to 90% lower training cost and 22.7% lower sampling cost.
Speaker
Speaker biography is not available.

Session Chair

Deepak Nadig (Purdue University, USA)

Session F-1

F-1: Network Security 1

Conference
11:00 AM — 12:30 PM PDT
Local
May 21 Tue, 1:00 PM — 2:30 PM CDT
Location
Prince of Wales/Oxford

A De-anonymization Attack Against Downloaders in Freenet

Yonghuan Xu, Ming Yang and Zhen Ling (Southeast University, China); Zixia Liu (Anhui University of Technology, China); Xiaodan Gu (Southeast University, China); Lan Luo (Anhui University of Technology, China)

Freenet is a well-known anonymous communication system that enables file sharing among users. It employs a probabilistic hops-to-live (HTL) decrement approach to hide the originator among the nodes in a multi-hop path. Therefore, all nodes should exhibit identical behaviors to preserve anonymity. However, we discover that the path folding mechanism in Freenet violates this principle due to a behavior discrepancy between downloaders and intermediate nodes. The path folding mechanism is designed to help Freenet evolve into a navigable small-world network. A delayed path folding message from a successor node may incur a timeout event at its predecessor, and an intermediate node reacts to such a timeout differently than a downloader does. Therefore, malicious nodes can deliberately trigger the timeout event to identify downloaders. The complex implementation of the path folding timeout detection mechanism in Freenet complicates our de-anonymization attack; we thoroughly analyze the underlying cause and develop three strategies to manipulate three types of messages at the malicious node, minimizing the false positive rate. We conduct extensive real-world experiments to verify the feasibility and effectiveness of our attack. They show that our attack achieves a true positive rate of 100% and a false positive rate of nearly 0% under two different Freenet download modes.
Speaker
Speaker biography is not available.

Trace-agnostic and Adversarial Training-resilient Website Fingerprinting Defense

Litao Qiao, Bang Wu, Heng Li, Cuiying Gao and Wei Yuan (Huazhong University of Science and Technology, China); Xiapu Luo (The Hong Kong Polytechnic University, Hong Kong)

Deep neural network (DNN) based website fingerprinting (WF) attacks can achieve an attack success rate (ASR) of over 90%, seriously threatening the privacy of Tor users. At present, adversarial example (AE) based defenses have demonstrated great potential to defend against WF attacks. However, existing AE-based defenses require knowing the complete traffic trace to calculate adversarial perturbations, which is unrealistic in practice. Moreover, they may become ineffective once adversarial training (AT) is adopted by attackers. To mitigate these two problems, we propose a defense called ALERT. It generates adversarial perturbations without knowing traffic traces. Moreover, ALERT can effectively resist AT-aided WF attacks. The key idea of ALERT is to produce universal perturbations that vary from user to user. We conduct extensive experiments to evaluate ALERT. In the closed world, ALERT significantly surpasses four representative WF defenses, including the state-of-the-art (SOTA) defense AWA. Specifically, ALERT reduces the ASR of the SOTA DF attack to 12.68% while using only 20.13% of the communication bandwidth. In the open world, ALERT uses only 19.91% of the bandwidth and reduces the True Positive Rate (TPR) of the DF attack to 37.46%, clearly outperforming the other defenses.
Speaker
Speaker biography is not available.

Explanation-Guided Backdoor Attacks on Model-Agnostic RF Fingerprinting

Tianya Zhao and Xuyu Wang (Florida International University, USA); Junqing Zhang (University of Liverpool, United Kingdom (Great Britain)); Shiwen Mao (Auburn University, USA)

Despite the proven capabilities of deep neural networks (DNNs) for radio frequency (RF) fingerprinting, their security vulnerabilities have been largely overlooked. Unlike the extensively studied image domain, few works have explored the threat of backdoor attacks on RF signals. In this paper, we analyze the susceptibility of DNN-based RF fingerprinting to backdoor attacks, focusing on a more practical scenario where attackers cannot access or control model gradients and training processes. We propose leveraging explainable machine learning techniques and autoencoders to guide the selection of trigger positions and values, enabling the creation of effective backdoor triggers in a model-agnostic manner. To comprehensively evaluate our backdoor attack, we employ four diverse datasets with two protocols (Wi-Fi and LoRa) across various DNN architectures. Given that RF signals are often transformed into the frequency or time-frequency domains, this study also assesses attack efficacy in the time-frequency domain. Furthermore, we experiment with potential defenses, demonstrating the difficulty of fully safeguarding against our attacks.
Speaker Tianya Zhao (Florida International University)

Tianya Zhao is a second-year Ph.D. student studying computer science at FIU, supervised by Dr. Xuyu Wang. Prior to this, he received his Master's degree from Carnegie Mellon University and Bachelor's degree from Hunan University. In his current Ph.D. program, he is focusing on AIoT, AI Security, Wireless Sensing, and Smart Health.


Exploiting Miscoordination of Microservices in Tandem for Effective DDoS Attacks

Anat Bremler-Barr (Tel-Aviv University, Israel); Michael Czeizler (Reichman University, Israel); Hanoch Levy (Tel Aviv University, Israel); Jhonatan Tavori (Tel-Aviv University, Israel)

Today's software development landscape has witnessed a shift towards microservices-based architectures. Under this approach, large software systems are implemented by combining loosely-coupled services, each responsible for a specific task and defined with separate scaling properties. Auto-scaling is a primary capability of cloud computing that allows systems to adapt to fluctuating traffic loads by dynamically increasing (scale-up) and decreasing (scale-down) the number of resources used. We observe that when microservices that utilize separate auto-scaling mechanisms operate in tandem to process traffic, they may perform ineffectively, especially under the overload conditions caused by DDoS attacks. This can result in throttling (Denial of Service -- DoS) and over-provisioning of resources (Economic Denial of Sustainability -- EDoS). This paper demonstrates how an attacker can exploit the tandem behavior of microservices with different auto-scaling mechanisms to mount an attack we denote the Tandem Attack. We demonstrate the attack on a typical serverless architecture and analyze its economic and performance damages. One intriguing finding is that some attacks may leave a cloud customer paying for service-denied requests. We conclude that independent scaling of loosely coupled components forms an inherent difficulty, and end-to-end controls may be needed.
Speaker Jhonatan Tavori (TAU)

Jhonatan is a fourth-year Computer Science PhD Student at TAU, advised by Prof. Hanoch Levy. His research focuses on analysing the operation of stochastic systems and networks performance in the presence of malicious behavior.


Session Chair

Hrishikesh B Acharya (Rochester Institute of Technology, USA)

Session Lunch-1

Conference Lunch (for Registered Attendees)

Conference
12:30 PM — 2:00 PM PDT
Local
May 21 Tue, 2:30 PM — 4:00 PM CDT
Location
Georgia Ballroom and Plaza Ballroom (2nd Floor)

Session A-2

A-2: Blockchains

Conference
2:00 PM — 3:30 PM PDT
Local
May 21 Tue, 4:00 PM — 5:30 PM CDT
Location
Regency A

A Generic Blockchain-based Steganography Framework with High Capacity via Reversible GAN

Zhuo Chen, Liehuang Zhu and Peng Jiang (Beijing Institute of Technology, China); Jialing He (Chongqing University, China); Zijian Zhang (Beijing Institute of Technology, China)

Blockchain-based steganography enables data hiding by encoding covert data into a specific blockchain transaction field. However, previous works focus on methods that embed into specific existing fields, while overlooking embedding via the generation of required fields. In this paper, we propose GBSF, a generic framework for blockchain-based steganography. The sender generates the required fields, into which additional covert data is embedded to enhance the channel capacity. Based on GBSF, we design R-GAN, which utilizes a generative adversarial network (GAN) with a reversible generator to generate the required fields and encodes additional covert data into the input noise of the reversible generator. We then explore the performance flaws of R-GAN and introduce CCR-GAN as an improvement. CCR-GAN employs a counter-intuitive data preprocessing mechanism to reduce decoding errors in the covert data; since this mechanism risks gradient explosion during model convergence, we design a custom activation function. We conduct experiments using the transaction amount of the Bitcoin mainnet as the required field. The results demonstrate that R-GAN and CCR-GAN can embed 11 bits (an embedding rate of 17.2%) and 24 bits (an embedding rate of 37.5%) of covert data within a transaction amount, and enhance the channel capacity of state-of-the-art works by 4.30% to 91.67% and 9.38% to 200.00%, respectively.
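For intuition about field embedding in its crudest form, the toy baseline below hides covert bits in the low-order bits of a transaction amount; unlike GBSF's R-GAN/CCR-GAN, which generate statistically plausible amounts, this naive scheme is easily detectable and is shown only to illustrate the encode/decode idea:

def embed(base_amount, bits):
    """Naive baseline: pack len(bits) covert bits into the low-order
    bits of a transaction amount (e.g., in satoshi)."""
    payload = int("".join(map(str, bits)), 2)
    return base_amount - (base_amount % (1 << len(bits))) + payload

def extract(amount, n_bits):
    """Recover the covert bits from the amount's low-order bits."""
    payload = amount % (1 << n_bits)
    return [int(b) for b in format(payload, "0{}b".format(n_bits))]

amount = embed(123_456_789, [1, 0, 1, 1])   # hide 4 covert bits
assert extract(amount, 4) == [1, 0, 1, 1]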
Speaker Zhuo Chen (Beijing Institute of Technology)

Zhuo Chen received the B.E. degree in information security from the North China Electric Power University, Beijing, China, in 2019. He is currently pursuing the Ph.D. degree with the School of Cyberspace Science and Technology, Beijing Institute of Technology. His current research interests include blockchain technology and covert communication.


Broker2Earn: Towards Maximizing Broker Revenue and System Liquidity for Sharded Blockchains

Qinde Chen, Huawei Huang and Zhaokang Yin (Sun Yat-Sen University, China); Guang Ye (Sen Yat-Sen University, China); Qinglin Yang (Sun Yat-Sen University, China)

Plenty of state-of-the-art blockchain protocols have been proposed to diminish cross-shard transactions (CTXs). For example, in BrokerChain, intermediary broker accounts help turn CTXs into intra-shard transactions through their voluntary liquidity services. However, we found that BrokerChain is impractical for a sharded blockchain because its designers did not consider how to recruit a sufficient number of broker accounts; blockchain clients thus have no incentive to provide token liquidity for others. To address this challenge, we design Broker2Earn, an incentive mechanism for blockchain users who choose to become brokers. By participating in Broker2Earn, brokers earn native revenues when they collateralize their tokens to the protocol. Furthermore, Broker2Earn benefits the sharded blockchain since it efficiently provides liquidity to diminish CTXs. We formulate the core module of Broker2Earn as a revenue-maximization problem, which is proven NP-hard. To solve this problem, we design an online approximation algorithm using relaxation and rounding, and we rigorously analyze its approximation ratio. Finally, we conduct extensive experiments using Ethereum transactions on an open-source blockchain testbed. The evaluation results show that Broker2Earn demonstrates near-optimal performance and outperforms other baselines in terms of broker revenues and the usage of system liquidity.
Speaker
Speaker biography is not available.

FileDES: A Secure Scalable and Succinct Blockchain-based Decentralized Encrypted Storage Network

Minghui Xu (Shandong University, China); JiaHao Zhang (ShanDong University, China); Hechuan Guo, Xiuzhen Cheng and Dongxiao Yu (Shandong University, China); Qin Hu (IUPUI, USA); Yijun Li and Yipu Wu (BaishanCloud, China)

Decentralized Storage Network (DSN) is an emerging technology that challenges traditional cloud-based storage systems by consolidating storage capacities from independent providers and coordinating to provide decentralized storage and retrieval services. However, current DSNs face several challenges associated with data privacy and the efficiency of their proof systems. To address these issues, we propose FileDES (Decentralized Encrypted Storage), which incorporates three essential elements: privacy preservation, scalable storage proof, and batch verification. FileDES provides encrypted data storage while maintaining data availability, with a scalable Proof of Encrypted Storage (PoES) algorithm that is resilient to Sybil and Generation attacks. Additionally, we introduce a rollup-based batch verification approach to simultaneously verify multiple files using publicly verifiable succinct proofs. We conducted a comparative evaluation of FileDES, Filecoin, Storj and Sia under various conditions, including a WAN composed of up to 120 geographically dispersed nodes. Our protocol outperforms the others in terms of proof generation/verification efficiency, storage costs, and scalability.
Speaker
Speaker biography is not available.

Account Migration across Blockchain Shards using Fine-tuned Lock Mechanism

Huawei Huang, Yue Lin and Zibin Zheng (Sun Yat-Sen University, China)

Sharding is one of the most promising techniques for improving blockchain scalability. In blockchain state sharding, account migration across shards is crucial for a low ratio of cross-shard transactions and for workload balance among shards. Reviewing state-of-the-art protocols that reconfigure blockchain shards via account shuffling, we find that account migration plays a significant role. In the literature, we find only one related work that utilizes a lock mechanism to realize account migration. We call this method the SOTA Lock: both the target account's state and its associated transactions must be locked while migrating the account between shards, causing a high makespan for the associated transactions. To address these challenges, we propose a dedicated Fine-tuned Lock protocol. Unlike the SOTA Lock, the proposed Fine-tuned Lock enables real-time processing of the affected transactions during account migration, lowering the makespan of associated transactions. We implement our Fine-tuned Lock protocol on an open-source blockchain testbed and deploy it in Tencent Cloud. The experimental results show that the proposed Fine-tuned Lock outperforms the SOTA Lock in terms of transaction makespan; for example, it achieves around 30% of the transaction makespan of the SOTA Lock.
Speaker
Speaker biography is not available.

Session Chair

Xiaodong Lin (University of Guelph, Canada)

Session B-2

B-2: MIMO and Beamforming

Conference
2:00 PM — 3:30 PM PDT
Local
May 21 Tue, 4:00 PM — 5:30 PM CDT
Location
Regency B

NOMA-Enhanced Quantized Uplink Multi-user MIMO Communications

Thanh Phung Truong, Anh-Tien Tran and Van Dat Tuong (Chung-Ang University, Korea (South)); Nhu-Ngoc Dao (Sejong University, Korea (South)); Sungrae Cho (Chung-Ang University, Korea (South))

This research examines quantized uplink multi-user MIMO communication systems with low-resolution quantizers at the users and the base station (BS). In such a system, we employ the non-orthogonal multiple access (NOMA) technique for communication between the users and the BS to enhance communication performance. To maximize the number of users that satisfy the quality of service (QoS) requirement while minimizing user transmit power, we jointly optimize the transmit power and precoding matrices at the users and the digital beamforming matrix at the BS. Owing to the non-convexity of the objective function, we transform the problem into a reinforcement learning problem and propose a deep reinforcement learning (DRL) framework named QNOMA-DRLPA to overcome this challenge. Because the actions decided by the DRL algorithm may not satisfy the problem constraints, we propose a post-actor process that redesigns the actions to meet all constraints. In simulations, we assess the proposed framework's training convergence and demonstrate its superior performance under various environmental parameters compared with other benchmark schemes.
Speaker
Speaker biography is not available.

A Learning-only Method for Multi-Cell Multi-User MIMO Sum Rate Maximization

Qingyu Song (The Chinese University of Hong Kong, Hong Kong); Juncheng Wang (Hong Kong Baptist University, Hong Kong); Jingzong Li (City University of Hong Kong, Hong Kong); Guochen Liu (Huawei Noah's Ark Lab, China); Hong Xu (The Chinese University of Hong Kong, Hong Kong)

Solving the sum rate maximization problem for interference reduction in multi-cell multi-user multiple-input multiple-output (MIMO) wireless communication systems has been investigated for a decade. Several machine learning-assisted methods have been proposed under conventional sum rate maximization frameworks, such as the Weighted Minimum Mean Square Error (WMMSE) framework. However, existing learning-assisted methods suffer from poor parallelization, and their performance is intrinsically bounded by WMMSE. In contrast, we propose a structural learning-only framework derived from an abstraction of WMMSE. Our framework increases the solvability of the original MIMO sum rate maximization problem by dimension expansion: a unitary learnable parameter matrix creates an equivalent problem in a higher dimension. We then propose a structural solution updating method to solve the higher dimensional problem, utilizing neural networks to generate the learnable matrix-multiplication parameters. The proposed structural solution updating method achieves lower complexity than WMMSE thanks to its parallel implementation. Simulation results under practical communication network settings demonstrate that our framework achieves up to 98% optimality relative to state-of-the-art algorithms while providing up to \(47\times\) acceleration in various scenarios.
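For reference, the objective being maximized has the standard weighted-sum-rate form of the MIMO interference channel (notation here is generic, not the paper's): with precoder \(\mathbf{V}_k\) for user \(k\),

\[
R_k \;=\; \log\det\!\Bigl(\mathbf{I} + \mathbf{H}_{kk}\mathbf{V}_k\mathbf{V}_k^{\mathsf H}\mathbf{H}_{kk}^{\mathsf H}\,\mathbf{C}_k^{-1}\Bigr),
\qquad
\mathbf{C}_k \;=\; \sigma^2\mathbf{I} + \sum_{j \neq k}\mathbf{H}_{jk}\mathbf{V}_j\mathbf{V}_j^{\mathsf H}\mathbf{H}_{jk}^{\mathsf H},
\]

and one solves \(\max_{\{\mathbf{V}_k\}} \sum_k \alpha_k R_k\) subject to per-transmitter power constraints \(\|\mathbf{V}_k\|_F^2 \le P_k\), where \(\mathbf{H}_{jk}\) is the channel from transmitter \(j\) to user \(k\). WMMSE iterates on an equivalent MSE reformulation of exactly this problem, which is what the proposed framework abstracts away from.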
Speaker Qingyu Song (The Chinese University of Hong Kong)

Qingyu is a Ph.D. student at The Chinese University of Hong Kong. He received his M.S. from Tsinghua University in 2021 and his B.S. from Harbin Institute of Technology in 2018. He focuses on utilizing machine learning techniques to solve optimization problems. His work has been accepted by CVPR, INFOCOM, ITSC, etc.


HoloBeam: Learning Optimal Beamforming in Far-Field Holographic Metasurface Transceivers

Debamita Ghosh and Manjesh K Hanawal (Indian Institute of Technology Bombay, India); Nikola Zlatanov (Innopolis University, Russia)

Holographic Metasurface Transceivers (HMTs) are emerging as cost-effective substitutes for large antenna arrays for beamforming in millimeter-wave and terahertz communication. However, to achieve the desired channel gains through beamforming in an HMT, the phase-shifts of a large number of elements need to be set appropriately, which is challenging. Moreover, these optimal phase-shifts depend on the location of the receivers, which may be unknown. In this work, we develop a learning algorithm using a fixed-budget multi-armed bandit framework to beamform and maximize the received signal strength at the receiver in far-field regions. Our algorithm, named Holographic Beam (HoloBeam), exploits the parametric form of the channel gains of the beams, which can be expressed in terms of two phase-shifting parameters. Even after parameterization, the problem remains challenging because the phase-shifting parameters take continuous values. To overcome this, HoloBeam works with discrete values of the phase-shifting parameters and exploits their unimodal relation with channel gains to learn the optimal values faster. We upper bound the probability of HoloBeam incorrectly identifying the (discrete) optimal parameters in terms of the number of pilots used in learning, and show that this probability decays exponentially with the number of pilot signals. We demonstrate through extensive simulations that HoloBeam outperforms state-of-the-art algorithms.
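How unimodality accelerates a fixed-budget search can be sketched as a ternary-search-style elimination over the discrete parameter grid (a hedged sketch, not the authors' algorithm; the sampling schedule is an assumption):

import numpy as np

def unimodal_search(measure, grid, budget):
    # measure(theta) returns a noisy signal-strength sample; channel
    # gain is assumed unimodal over the ordered grid, so comparing two
    # interior probes lets us discard a third of the interval.
    lo, hi = 0, len(grid) - 1
    rounds = max(1, int(np.ceil(np.log(len(grid) + 1) / np.log(1.5))))
    per_probe = max(1, budget // (2 * rounds))
    while hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        g1 = np.mean([measure(grid[m1]) for _ in range(per_probe)])
        g2 = np.mean([measure(grid[m2]) for _ in range(per_probe)])
        if g1 < g2:
            lo = m1 + 1      # unimodality: optimum cannot lie left of m1
        else:
            hi = m2 - 1
    return grid[(lo + hi) // 2]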
Speaker
Speaker biography is not available.

FTP: Enabling Fast Beam-Training for Optimal mmWave Beamforming

Wei-Han Chen, Xin Liu, Kannan Srinivasan and Srinivasan Parthasarathy (The Ohio State University, USA)

To maximize Signal-to-Noise Ratio (SNR), it is necessary to move beyond selecting beams from a codebook. While state-of-the-art approaches can substantially improve SNR over codebook-based beam selection by exploiting the globally-optimal beam, they incur significant beam-training overhead, which limits their applicability to large-scale antenna arrays and their scalability to multiple users. In this paper, we propose FTP, a highly scalable beam-training solution that can find the globally-optimal beam with minimal beam-training overhead. FTP works by estimating each path's direction along with its complex gain and synthesizing the globally-optimal beam from these parameters. Our design significantly reduces the search space for finding these path parameters, which enables FTP to scale to large antenna arrays. We implemented and evaluated FTP on a mmWave experimental platform with 32 antenna elements. Our results demonstrate that FTP achieves SNR performance comparable with the state-of-the-art while reducing the beam-training overhead by three orders of magnitude. Under simulated settings, we demonstrate that the gain of FTP can be even more significant for larger antenna arrays with up to 1024 elements.
Speaker
Speaker biography is not available.

Session Chair

Joerg Widmer (IMDEA Networks Institute, Spain)

Enter Zoom
Session C-2

C-2: Wireless Security

Conference
2:00 PM — 3:30 PM PDT
Local
May 21 Tue, 4:00 PM — 5:30 PM CDT
Location
Regency C/D

Silent Thief: Password Eavesdropping Leveraging Wi-Fi Beamforming Feedback from POS Terminal

Siyu Chen, Hongbo Jiang, Jingyang Hu, Zhu Xiao and Daibo Liu (Hunan University, China)

Nowadays, point-of-sale (POS) terminals are no longer limited to wired connections, and many of them rely on Wi-Fi for data transmission. While Wi-Fi provides the convenience of wireless connectivity, it also introduces significant security risks. Previous research has explored Wi-Fi-based eavesdropping methods. However, these methods often rely on Channel State Information (CSI), which has limited environmental robustness, and require invasive Wi-Fi hardware, making them impractical in real-world scenarios. In this work, we present SThief, a practical Wi-Fi-based eavesdropping attack that leverages the beamforming feedback information (BFI) exchanged between POS terminals and access points (APs) to infer keystrokes on POS keypads. By capitalizing on the clear-text transmission of BFI, the attack is more flexible and practical than traditional CSI-based methods. BFI is transmitted in the uplink, carrying the downlink channel information that allows the AP to adjust beamforming angles; we exploit this channel information for keystroke inference. To enhance the BFI series, we use maximal ratio combining (MRC), ensuring efficiency across various scenarios. Additionally, we employ the Connectionist Temporal Classification method for keystroke inference, providing exceptional generalization and scalability. Extensive testing validates SThief's effectiveness, achieving an impressive 81% accuracy rate in inferring 6-digit POS passwords within the top-100 attempts.
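The role of maximal ratio combining here can be sketched as an SNR-weighted sum over the parallel BFI series (a hedged sketch; the per-branch variance used as an SNR proxy is our assumption, not the paper's estimator):

import numpy as np

def mrc_combine(bfi_streams):
    # bfi_streams: array of shape (branches, time), one series per
    # angle/subcarrier. Stronger branches get larger weights, which is
    # the essence of maximal ratio combining.
    X = np.asarray(bfi_streams, dtype=float)
    X = X - X.mean(axis=1, keepdims=True)    # remove the static offset
    snr = X.var(axis=1)                      # crude per-branch SNR proxy
    w = snr / (snr.sum() + 1e-12)
    return w @ X                             # combined keystroke series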
Speaker Siyu Chen (Hunan University)

Siyu Chen received the B.S. degree in communication engineering from Hunan University, Changsha, China, in 2021, where he is currently pursuing the Ph.D. degree with the College of Computer Science and Electronic Engineering. He has published papers in IEEE INFOCOM, IEEE IoTJ, IEEE TMC, and IEEE JSAC. His research interests lie in the areas of wireless sensing and Internet of Things security.


Two-Way Aerial Secure Communications via Distributed Collaborative Beamforming under Eavesdropper Collusion

Jiahui Li and Geng Sun (Jilin University, China); Qingqing Wu (Shanghai Jiao Tong University, China); Shuang Liang (Northeast Normal University, China); Pengfei Wang (Dalian University of Technology, China); Dusit Niyato (Nanyang Technological University, Singapore)

Unmanned aerial vehicle (UAV)-enabled aerial communication provides a flexible, reliable, and cost-effective solution for a range of wireless applications. However, due to the high line-of-sight (LoS) probability, aerial communications between UAVs are vulnerable to eavesdropping attacks, particularly when multiple eavesdroppers collude. In this work, we introduce distributed collaborative beamforming (DCB) into UAV swarms and handle eavesdropper collusion by controlling the corresponding signal distributions. Specifically, we consider a two-way DCB-enabled aerial communication between two UAV swarms and construct these swarms as two UAV virtual antenna arrays. Then, we maximize the two-way secrecy capacity with respect to the known eavesdroppers and minimize the maximum sidelobe level to limit information leakage toward unknown eavesdroppers. Simultaneously, we minimize the energy consumption of the UAVs for constructing the virtual antenna arrays. Given the conflicting relationship between secrecy performance and energy efficiency, we cast these objectives as a multi-objective optimization problem. We then propose an enhanced multi-objective swarm intelligence algorithm that exploits the characterized properties of the problem. Simulation results show that our algorithm outperforms state-of-the-art baseline algorithms. Experimental tests demonstrate that our method can be deployed on UAV platforms with limited computing power and helps save computational resources.
Speaker
Speaker biography is not available.

EchoLight: Sound Eavesdropping based on Ambient Light Reflection

Guoming Zhang, Zhijie Xiang, Heqiang Fu, Yanni Yang and Pengfei Hu (Shandong University, China)

Sound eavesdropping using light has been an area of considerable interest and concern, as it can be achieved over long distances. However, previous work has often lacked stealth (e.g., actively emitting laser beams) or been limited in realistic application range (e.g., using direct light from a device's indicator LED or a hanging light bulb). In this paper, we present EchoLight, a non-intrusive, passive, and long-range sound eavesdropping method that utilizes the extensive reflection of ambient light from vibrating objects to reconstruct sound. We analyze the relationship between reflected light signals and sound signals, particularly in situations where the frequency response of reflective objects and the efficiency of diffuse reflection are suboptimal. Based on this analysis, we introduce a cGAN-based algorithm to address nonlinear distortion and spectral absence in the frequency domain of the recovered sound. We extensively evaluate EchoLight's performance in a variety of real-world scenarios; it accurately reconstructs audio across a variety of source distances, attack distances, sound levels, light sources, and reflective materials. Our results reveal that the reconstructed audio exhibits a high degree of similarity to the original audio at attack distances of over 40 meters.
Speaker Heqiang Fu (Shandong University)

Heqiang Fu is currently working towards a master’s degree at the School of Computer Science, Shandong University, China. His recent research has centered around Internet of Things (IoT) security.


mmEar: Push the Limit of COTS mmWave Eavesdropping on Headphones

Xiangyu Xu, Yu Chen and Zhen Ling (Southeast University, China); Li Lu (Zhejiang University, China); Luo Junzhou (Southeast University, China); Xinwen Fu (University of Massachusetts Lowell, USA)

Recent years have witnessed a surge of headphone (including in-ear headphone) usage for work and communications. Because of their privacy-preserving reputation, people feel comfortable holding confidential conversations while wearing headphones and pay little attention to speech leakage. In this paper, we present an end-to-end eavesdropping system, mmEar, which demonstrates the feasibility of launching an eavesdropping attack on headphones using a commercial mmWave radar. Different from previous works that realize eavesdropping by sensing speech-induced vibrations of reasonable amplitude, mmEar focuses on capturing the extremely faint, low signal-to-noise-ratio vibrations on the surface of headphones. Toward this end, we propose a faint vibration emphasis (FVE) method that models and amplifies the mmWave responses to speech-induced vibrations on the In-phase and Quadrature (IQ) plane, followed by a deep denoising network to further improve the SNR. To achieve practical eavesdropping on various headphones and setups, we propose a cGAN model with a pretrain-finetune scheme, boosting the generalization ability and robustness of the attack by generating high-quality synthetic data. We evaluate mmEar with extensive experiments on different headphones and earphones and find that most of them can be compromised by the proposed attack for speech recovery.
Speaker Yu Chen (Southeast University)



Session Chair

Edmundo Monteiro (University of Coimbra, Portugal)

Enter Zoom
Session D-2

D-2: Multi-Armed Bandits

Conference
2:00 PM — 3:30 PM PDT
Local
May 21 Tue, 4:00 PM — 5:30 PM CDT
Location
Regency E

Achieving Regular and Fair Learning in Combinatorial Multi-Armed Bandit

Xiaoyi Wu and Bin Li (The Pennsylvania State University, USA)

The combinatorial multi-armed bandit is a model for maximizing cumulative rewards in the presence of uncertainty. Motivated by two important wireless network applications (multi-user interactive and panoramic scene delivery, and timely information delivery from sensing sources), it is important not only to maximize cumulative rewards but also to ensure fairness among arms (i.e., the minimum average reward required by each arm) and reward regularity (i.e., how often each arm receives the reward). In this paper, we develop a parameterized regular and fair learning algorithm to achieve these three objectives. In particular, the proposed algorithm linearly combines virtual queue-lengths (tracking the fairness violations), Time-Since-Last-Reward (TSLR) metrics, and Upper Confidence Bound (UCB) estimates in its weight measure. Here, TSLR, similar to age-of-information, measures the number of rounds elapsed since the last time an arm received a reward, capturing reward regularity, while UCB estimates balance the tradeoff between exploration and exploitation in online learning. By capturing a key relationship between virtual queue-lengths and TSLR metrics and utilizing several non-trivial Lyapunov functions, we analytically characterize zero cumulative fairness violation, reward regularity, and cumulative regret performance under our proposed algorithm. These findings are corroborated by our extensive simulations.
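The weight measure described above can be sketched directly (a minimal illustration; the coefficients and the exact UCB form are assumptions, not the paper's parameterization):

import numpy as np

def select_arms(Q, tslr, mu_hat, pulls, t, K, alpha=1.0, beta=1.0):
    # Linearly combine fairness debt (virtual queue lengths Q),
    # regularity pressure (time-since-last-reward tslr), and UCB
    # exploration bonuses, then play the K highest-weight arms.
    ucb = mu_hat + np.sqrt(2.0 * np.log(max(t, 2)) / np.maximum(pulls, 1))
    weight = Q + alpha * tslr + beta * ucb
    return np.argsort(weight)[-K:]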
Speaker
Speaker biography is not available.

Adversarial Combinatorial Bandits with Switching Cost and Arm Selection Constraints

Yin Huang (University of Miami, USA); Qingsong Liu (Tsinghua University, China); Jie Xu (University of Miami, USA)

The multi-armed bandits (MAB) framework is widely used for sequential decision-making under uncertainty. To address the increasing complexity of real-world systems and their operational requirements, researchers have proposed and studied various extensions to the basic MAB framework. In this paper, we focus on an adversarial MAB problem inspired by real-world systems with combinatorial semi-bandit arms, switching costs, and anytime cumulative arm selection constraints. To tackle this challenging problem, we introduce the Block-structured Follow-the-Regularized-Leader (B-FTRL) algorithm. Our approach employs a hybrid Tsallis-Shannon entropy regularizer in arm selection and incorporates a block structure that divides time into blocks to minimize arm switching costs. The theoretical analysis shows that B-FTRL achieves a reward regret bound of \(O(T^\frac{2a-b+1}{1+a} + T^\frac{b}{1+a})\) and a switching regret bound of \(O(T^\frac{1}{1+a})\). By carefully selecting the values of \(a\) and \(b\), we are able to limit the total regret to \(O(T^{2/3})\) while satisfying the arm selection constraints in expectation. This outperforms the state-of-the-art regret bound of \(O(T^{3/4})\) and expected constraint violation bound \(o(1)\), which are derived in less challenging stochastic reward environments. Additionally, we provide a high-probability constraint violation bound of \(O(\sqrt{T})\). Numerical results are presented to demonstrate its superiority in comparison to other existing methods.
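For concreteness, one parameter choice consistent with the stated bounds (our arithmetic, not a value quoted from the paper) is \(a = 1/2\), \(b = 1\):

\[
\frac{2a-b+1}{1+a} = \frac{1}{3/2} = \frac{2}{3}, \qquad \frac{b}{1+a} = \frac{1}{3/2} = \frac{2}{3}, \qquad \frac{1}{1+a} = \frac{2}{3},
\]

so the reward regret and the switching regret are both \(O(T^{2/3})\), matching the stated total regret.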
Speaker
Speaker biography is not available.

Backlogged Bandits: Cost-Effective Learning for Utility Maximization in Queueing Networks

Juaren Steiger (Queen's University, Canada); Bin Li (The Pennsylvania State University, USA); Ning Lu (Queen's University, Canada)

Bipartite queueing networks with unknown statistics, where jobs are routed to and queued at servers and yield server-dependent utilities upon completion, model a wide range of problems in communications and related research areas (e.g., call routing in call centers, task assignment in crowdsourcing, job dispatching to cloud servers). The utility maximization problem in such networks is a bandit learning problem with delayed semi-bandit feedback that depends on the server queueing delay. In this paper, we propose an efficient algorithm that overcomes the technical shortcomings of the state-of-the-art and achieves regret, queue length, and feedback delay that all scale as the square root of the time horizon. Our approach also accommodates additional constraints, such as quality-of-service, fairness, and budgeted cost constraints, with constant expected peak violation and zero expected violation after a fixed timeslot. Empirically, our algorithm's regret is competitive with the state-of-the-art on some problem instances and outperforms it on others, with much lower delay and constraint violation.
Speaker Juaren Steiger (Queen's University)

Juaren Steiger is a PhD student at Queen's University in Canada studying machine learning and its applications to communication networks.


Edge-MSL: Split Learning on the Mobile Edge via Multi-Armed Bandits

Taejin Kim (CACI Intl. Inc. & Carnegie Mellon University, USA); Jinhang Zuo (University of Massachusetts Amherst & California Institute of Technology, USA); Xiaoxi Zhang (Sun Yat-sen University, China); Carlee Joe-Wong (Carnegie Mellon University, USA)

The emergence of 5G technology and edge computing enables the collaborative use of data by mobile users for scalable training of machine learning models. Privacy concerns and communication constraints, however, can prohibit users from offloading their data to a single server for training. Split learning, in which models are split between end users and a central server, somewhat resolves these concerns but requires exchanging information between users and the server in each local training iteration. Thus, splitting models between end users and geographically close edge servers can significantly reduce communication latency and training time. In this setting, users must decide to which edge servers they should offload part of their model to minimize training latency, a decision further complicated by the presence of multiple, mobile users competing for resources. We present Edge-MSL, a novel formulation of the mobile split learning problem as a contextual multi-armed bandit. To counter the scalability challenges of a centralized Edge-MSL solution, we introduce a distributed solution that minimizes competition between users for edge resources, reducing regret by at least a factor of two compared to a greedy baseline. The distributed Edge-MSL approach also improves trained model convergence, with a 15% increase in test accuracy.
Speaker Taejin Kim

Taejin Kim is a research engineer at CACI, currently working in the area of distributed machine learning systems and security. Prior to joining CACI, he was a PhD student at Carnegie Mellon University, performing research in the area of mobile edge computing and distributed learning optimization.


Session Chair

Bo Ji (Virginia Tech, USA)

Enter Zoom
Session E-2

E-2: Scheduling 1

Conference
2:00 PM — 3:30 PM PDT
Local
May 21 Tue, 4:00 PM — 5:30 PM CDT
Location
Regency F

Age-minimal CPU Scheduling

Mengqiu Zhou and Meng Zhang (Zhejiang University, China); Howard Yang (Zhejiang University, China & University of Illinois at Urbana Champaign (UIUC), USA); Roy Yates (Rutgers University, USA)

The proliferation of real-time status updating applications and ubiquitous mobile devices has motivated the analysis and optimization of data freshness in the context of the age of information (AoI). At the same time, increasing requirements on computer performance have inspired research on CPU scheduling, with a focus on reducing energy consumption. However, since prior CPU scheduling strategies have ignored data freshness, we formulate the first CPU scheduling problem that aims to minimize the long-term average AoI subject to an average power constraint. In particular, we optimize CPU scheduling strategies that specify when the CPU sleeps and adapt the CPU speed (clock frequency) during the execution of update-processing tasks. We cast the timely CPU scheduling problem as a constrained semi-Markov decision process with an uncountable state space. We develop a value-iteration-based algorithm and prove its convergence in this infinite space to obtain the optimal policy. Compared with existing benchmarks in terms of long-term average AoI, numerical results show that our proposed scheme can reduce the AoI by up to 53\% and obtains greater benefits under a tighter power constraint. In addition, for a given AoI target, the timely CPU scheduling policy can save more than 50\% on energy consumption.
Speaker
Speaker biography is not available.

Cur-CoEdge: Curiosity-Driven Collaborative Request Scheduling in Edge-Cloud Systems

Yunfeng Zhao and Chao Qiu (Tianjin University, China); Xiaoyun Shi (Tianjin University, China); Xiaofei Wang (Tianjin Key Laboratory of Advanced Networking, Tianjin University, China); Dusit Niyato (Nanyang Technological University, Singapore); Victor C.M. Leung (Shenzhen University, China & The University of British Columbia, Canada)

The collaboration between clouds and edges unlocks the full potential of edge-cloud systems. The edge-cloud platform brings significant decentralization, heterogeneity, complexity, and instability. These characteristics pose unprecedented challenges to the optimal scheduling problem in edge-cloud systems, including inaccurate decision-making and slow convergence. In this paper, we propose a curiosity-driven collaborative request scheduling scheme for edge-cloud systems, named Cur-CoEdge. To tackle the challenge of inaccurate decision-making, we introduce a time-scale and decision-level interaction mechanism that employs a small-large-time-scale scheduling learning framework, facilitating mutual learning between different decision levels. To address the challenge of slow convergence, we investigate its underlying causes, such as sparse reward settings in reinforcement learning. In response, we develop a curiosity-driven collaborative exploration approach that fosters intrinsic curiosity in the cloud and simultaneously motivates dispatchers to explore the environment both individually and collectively; the effectiveness of this collaborative exploration is supported by a theoretical proof of convergence. Finally, we implement a prototype system on real network hardware and evaluate it with two real-world traces. Evaluations demonstrate significant improvements, with up to a 26% increase in time efficiency, a 40% rise in system throughput, and a 71% enhancement in convergence speed.
Speaker Yunfeng Zhao (Tianjin University)

Yunfeng Zhao is a PhD candidate at the College of Intelligence and Computing, Tianjin University, China. Her current research interests include edge computing, edge intelligence, and distributed machine learning.


InNetScheduler: In-network scheduling for time- and event-triggered critical traffic in TSN

Xiangwen Zhuge, Xinjun Cai, Xiaowu He, Zeyu Wang, Fan Dang, Wang Xu and Zheng Yang (Tsinghua University, China)

Time-Sensitive Networking (TSN) is an enabling technology for Industry 4.0. Traffic scheduling plays a key role in TSN, ensuring low-latency and deterministic transmission of critical traffic. As industrial networks scale, TSN networks are expected to support a rising amount of both time-triggered and event-triggered critical traffic (TCT and ECT). In this work, we present InNetScheduler, the first in-network TSN scheduling paradigm that boosts the throughput, i.e., the number of scheduled data flows, for both traffic types. Different from existing approaches that conduct the entire scheduling on a server, InNetScheduler leverages the computation resources on switches to promptly schedule latency-critical ECT and delegates the computation-intensive TCT scheduling to the server. The key innovations of InNetScheduler include a Load-Aware Optimizer to mitigate ECT conflicts, a Relaxed ECT Scheduler to accelerate in-network computation, and an End-to-End Determinism Guarantee to lower scheduling jitter. We fully implement a suite of InNetScheduler-compatible TSN switches with hardware-software co-design. Extensive experiments are conducted on both simulation and physical testbeds, and the results demonstrate InNetScheduler's superior performance. By unleashing the power of in-network computation, InNetScheduler points out a direction for extending the capacity of existing industrial networks.
Speaker Xiangwen Zhuge (Tsinghua Univeristy)

Xiangwen Zhuge is currently a PhD student in Software Engineering at Tsinghua University, where he also completed his undergraduate studies. His research primarily focuses on time-sensitive networking (TSN).


Learning-based Scheduling for Information Gathering with QoS Constraints

Qingsong Liu, Weihang Xu and Zhixuan Fang (Tsinghua University, China)

The problem of wireless scheduling over unreliable channels has attracted much attention due to its great practicality in Internet of Things systems. Most previous work focuses on optimizing throughput, energy consumption, or operational cost, or on settings where the channel information is known a priori. In this paper, we consider a more general setting in which packets from different sources have different values and each heterogeneous source has a distinct Quality of Service (QoS) requirement. The packet values and channel reliabilities are unknown in advance, and the controller schedules sources over time to maximize the collected packet values while providing a QoS guarantee for each source. For the stationary case, where packet values are independent and identically distributed (i.i.d.), we propose an efficient learning policy based on a linear programming (LP) methodology and show that it provably meets the QoS constraint of each source while incurring only logarithmic regret. In the special case of known channel reliability, our algorithm further guarantees a bounded regret. Furthermore, in the case of non-stationary packet values, we apply a sliding-window technique to our LP-based algorithm and prove that it still guarantees sublinear regret while meeting each source's QoS requirement.
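The LP backbone of such a policy can be sketched as follows (a hedged, simplified formulation: the value/QoS model below is an illustrative stand-in for the paper's program):

import numpy as np
from scipy.optimize import linprog

def lp_schedule(values, succ_prob, qos_min):
    # Choose scheduling fractions x_i in [0, 1] to maximize expected
    # collected value subject to per-source QoS floors
    # (x_i * p_i >= q_i) and a unit scheduling budget (sum x_i <= 1).
    n = len(values)
    c = -np.asarray(values) * np.asarray(succ_prob)   # maximize => minimize -c
    A_ub = np.vstack([-np.diag(succ_prob), np.ones((1, n))])
    b_ub = np.concatenate([-np.asarray(qos_min), [1.0]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 1.0)] * n)
    return res.x

In an online setting, values and succ_prob would be replaced by empirical estimates (plus confidence bonuses) and the LP re-solved as the estimates sharpen.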
Speaker
Speaker biography is not available.

Session Chair

Mohamed Hefeeda (Simon Fraser University, Canada)

Enter Zoom
Session F-2

F-2: Network Security 2

Conference
2:00 PM — 3:30 PM PDT
Local
May 21 Tue, 4:00 PM — 5:30 PM CDT
Location
Prince of Wales/Oxford

WFGuard: An Effective Fuzz-Testing-Based Traffic Morphing Defense Against Website Fingerprinting

Zhen Ling and Gui Xiao (Southeast University, China); Lan Luo (Anhui University of Technology, China); Rong Wang and Xiangyu Xu (Southeast University, China); Guangchi Liu (Southeast University, USA)

The Website Fingerprinting (WF) attack, a type of traffic analysis attack, enables a local, passive eavesdropper situated between the Tor client and the Tor entry node to deduce which websites the client is visiting. Currently, deep learning (DL) based WF attacks have overcome a number of proposed WF defenses, demonstrating superior performance compared to traditional machine learning (ML) based WF attacks. To mitigate this threat, we present FUZZD, a fuzz-testing-based traffic-morphing WF defense. FUZZD employs fine-grained neuron information within WF classifiers to design a joint optimization function and then applies gradient ascent to maximize both neuron values and the misclassification probability of DL-based WF classifiers. During each traffic mutation cycle, a gradient-based dummy-traffic injection pattern generation approach continuously mutates the traffic until a pattern emerges that successfully deceives the classifier. Finally, the patterns present in successful variant traces are extracted and applied as defense strategies to Tor traffic. Extensive evaluations reveal that FUZZD effectively decreases the accuracy of DL-based WF classifiers (e.g., DF and Var-CNN) to a mere 4.43%, while incurring only an 11.04% bandwidth overhead. This highlights the potential efficacy of our approach in mitigating WF attacks.
Speaker Gui Xiao (Southeast University)



Catch Me if You Can: Effective Honeypot Placement in Dynamic AD Attack Graphs

Quang Huy Ngo (The University of Adelaide, Australia); Mingyu Guo and Hung Xuan Nguyen (University of Adelaide, Australia)

We study a Stackelberg game between an attacker and a defender on large Active Directory (AD) attack graphs, where the defender employs a set of honeypots to stop the attacker from reaching high-value targets. Contrary to existing works that focus on small and static attack graphs, AD graphs typically contain hundreds of thousands of nodes and edges and constantly change over time. We consider two types of attackers: a simple attacker who cannot observe honeypots and a competent attacker who can. To jointly solve the game, we propose a mixed-integer programming (MIP) formulation. We observe that the optimal blocking plan for static graphs performs poorly on dynamic graphs. To solve the dynamic graph problem, we re-design the formulation by combining m MIP instances (dyMIP(m)) to produce a near-optimal blocking plan. Furthermore, to handle the large number of dynamic graph instances, we use a clustering algorithm to efficiently find the m most representative graph instances for a constant m. We prove a lower bound on the optimal blocking strategy for dynamic graphs and show that our dyMIP(m) algorithms produce close-to-optimal results for a range of AD graphs under realistic conditions.
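On a toy scale, the underlying blocking problem can be stated exhaustively (for intuition only; this brute force is infeasible on real AD graphs, which is precisely why the paper develops the MIP and clustering machinery):

import itertools
import networkx as nx

def best_blocking(G, entries, target, budget):
    # Choose `budget` non-entry nodes to convert to honeypots so as to
    # minimize how many entry nodes can still reach the target.
    candidates = [v for v in G if v not in entries and v != target]
    best, best_exposed = None, float("inf")
    for block in itertools.combinations(candidates, budget):
        H = G.copy()
        H.remove_nodes_from(block)
        exposed = sum(1 for e in entries if nx.has_path(H, e, target))
        if exposed < best_exposed:
            best, best_exposed = block, exposed
    return best, best_exposed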
Speaker
Speaker biography is not available.

PTPsec: Securing the Precision Time Protocol Against Time Delay Attacks Using Cyclic Path Asymmetry Analysis

Andreas Finkenzeller and Oliver Butowski (Technical University of Munich, Germany); Emanuel Regnath (Siemens AG, Germany); Mohammad Hamad and Sebastian Steinhorst (Technical University of Munich, Germany)

High-precision time synchronization is a vital prerequisite for many modern applications and technologies, including Smart Grids, Time-Sensitive Networking, and 5G networks. Although the Precision Time Protocol (PTP) can fulfill this requirement in trusted environments, it becomes unreliable in the presence of specific cyber attacks. In particular, time delay attacks pose the highest threat to the protocol, enabling attackers to make targeted clocks diverge undetected. Current solutions cannot sufficiently mitigate sophisticated delay attacks, creating an urgent demand for effective countermeasures. This work proposes an approach to detect and counteract time delay attacks against PTP based on cyclic path asymmetry analysis over redundant paths. We provide a method to find redundant paths in arbitrary networks and exploit this redundancy to reveal undesirable asymmetries on the synchronization path. Furthermore, we present PTPsec, a secure PTP implementation based on the latest IEEE 1588 standard. By integrating our solution into the existing protocol, we extend PTP with reliable attack detection and mitigation. We validate our approach on a hardware testbed that includes an attacker capable of performing static and incremental delay attacks at microsecond precision. Our experimental results show that both attack scenarios can be reliably detected with effectively zero response time.
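The root cause is visible in the standard PTP offset computation (textbook IEEE 1588 relations, not material taken from the paper). With master timestamps \(t_1, t_4\) and slave timestamps \(t_2, t_3\), the slave estimates

\[
\hat{\theta} = \frac{(t_2 - t_1) - (t_4 - t_3)}{2}, \qquad \hat{d} = \frac{(t_2 - t_1) + (t_4 - t_3)}{2},
\]

both of which assume symmetric propagation delays. An attacker who delays only one direction by \(\delta\) biases the offset estimate by \(\delta/2\) without violating any protocol check, which is why measuring propagation along redundant paths, as PTPsec does, can expose the injected asymmetry.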
Speaker
Speaker biography is not available.

CARBINE: Exploring Additional Properties of HyperLogLog for Secure and Robust Flow Cardinality Estimation

Damu Ding (University of Oxford, United Kingdom (Great Britain))

Counting the distinct elements (also called the flow cardinality) of large data streams in the network is of primary importance, as it underpins many practical monitoring applications, including DDoS attack and malware spread detection. However, modern intrusion detection systems struggle to reduce both the memory and computational overhead of such measurements. Many algorithms have been designed to estimate flow cardinality, among which HyperLogLog has proved the most efficient due to its high accuracy and low memory usage. While HyperLogLog performs well on flow cardinality estimation, it has inherent algorithmic vulnerabilities that lead to both security and robustness issues. To overcome these issues, we first investigate two possible threats to HyperLogLog and propose corresponding detection and protection solutions. Leveraging these solutions, we introduce CARBINE, an approach that aims at identifying and eliminating the threats most likely to mislead the output of HyperLogLog. We implement CARBINE and evaluate its threat detection performance, especially in a practical network scenario under a volumetric DDoS attack. The results show that CARBINE can effectively detect different kinds of threats while achieving even higher accuracy and update speed than the original HyperLogLog.
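For readers unfamiliar with the baseline, a plain (unprotected) HyperLogLog, the structure CARBINE hardens, fits in a few lines; this sketch omits HLL's small/large-range corrections and, of course, all of CARBINE's defenses:

import hashlib

class HyperLogLog:
    def __init__(self, b=10):
        self.b = b                 # 2^b registers
        self.m = 1 << b
        self.reg = [0] * self.m

    def add(self, item):
        # 64-bit hash: the top b bits pick a register, the remaining
        # bits yield the rank (position of the leftmost 1-bit).
        h = int.from_bytes(hashlib.sha1(item.encode()).digest()[:8], "big")
        j = h >> (64 - self.b)
        rest = h & ((1 << (64 - self.b)) - 1)
        rank = (64 - self.b) - rest.bit_length() + 1
        self.reg[j] = max(self.reg[j], rank)

    def estimate(self):
        alpha = 0.7213 / (1 + 1.079 / self.m)   # bias correction (m >= 128)
        z = 1.0 / sum(2.0 ** -r for r in self.reg)
        return alpha * self.m * self.m * z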
Speaker
Speaker biography is not available.

Session Chair

Jun Zhao (Nanyang Technological University, Singapore)

Enter Zoom
Session Break-1-2

Coffee Break

Conference
3:30 PM — 4:00 PM PDT
Local
May 21 Tue, 5:30 PM — 6:00 PM CDT
Location
Regency Foyer & Hallway

Enter Zoom
Session A-3

A-3: Video Streaming

Conference
4:00 PM — 5:30 PM PDT
Local
May 21 Tue, 6:00 PM — 7:30 PM CDT
Location
Regency A

Gecko: Resource-Efficient and Accurate Queries in Real-Time Video Streams at the Edge

Liang Wang (Huazhong University of Science and Technology, China); Xiaoyang Qu (Ping An Technology (Shenzhen) Co., Ltd, China); Jianzong Wang (Pingan, China); Guokuan Li and Jiguang Wan (Huazhong University of Science and Technology, China); Nan Zhang (Ping An Technology (Shenzhen) Co., Ltd., China); Song Guo (The Hong Kong University of Science and Technology, Hong Kong); Jing Xiao (Ping An Insurance Company of China,Ltd., China)

Surveillance cameras are ubiquitous nowadays, and users' increasing need to access real-world information (e.g., finding abandoned luggage) has spurred object queries over real-time video. While recent real-time video query processing systems exhibit excellent performance, they are of limited practical utility because they overlook several crucial aspects, including multi-camera exploration, resource contention, and content awareness. Motivated by these issues, we propose Gecko, a framework that provides resource-efficient and accurate real-time object queries over massive videos on edge devices. Gecko (i) obtains optimal models from a model zoo and assigns them to edge devices for executing current queries, (ii) optimizes resource usage of the edge cluster at runtime by dynamically adjusting the frame query interval of each video stream and forking/joining running models on edge devices, and (iii) improves accuracy in changing video scenes through fine-grained stream transfer and continuous learning of models. Our evaluation with real-world video streams and queries shows that Gecko achieves up to 2x higher resource efficiency and increases overall query accuracy by at least 12% compared with prior work, while delivering excellent scalability for practical deployment.
Speaker Liang Wang (Huazhong University of Science and Technology)

Liang Wang is a third-year Master's student in the PDSL group at Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, advised by Prof. Jiguang Wan. His current research interests focus on computing and storage systems in cloud and edge environments. Before joining HUST, he earned a Bachelor's degree in Software Engineering from Wuhan University in 2021. Liang has also completed internships at PingCAP, Huawei Cloud, and Ping An Technology.


Rosevin: Employing Resource- and Rate-Adaptive Edge Super-Resolution for Video Streaming

Xiaoxi Zhang (Sun Yat-sen University, China); Haoran Xu (Sun Yat-Sen University, China); Longhao Zou (Peng Cheng Laboratory, Shenzhen & Southern University of Science and Technology, China); Jingpu Duan (Peng Cheng Laboratory, China); Chuan Wu (The University of Hong Kong, Hong Kong); Yali Xue and ZuoZhou Chen (Peng Cheng Laboratory, China); Xu Chen (Sun Yat-sen University, China)

Today's video streaming service providers have exploited cloud-edge collaborative networks for geo-distributed video delivery. Existing content delivery network (CDN) scheduling and adaptive bitrate algorithms may not fully utilize edge resources or may lack global control to optimize resource sharing. The emerging super-resolution (SR) approach can unleash the potential of leveraging computation resources to compensate for bandwidth consumption by producing high-quality videos from low-resolution content. Yet the uncertain sensitivity of SR to allocated resources and its interplay with bitrate adaptation are under-explored. In this work, we propose Rosevin, the first resource scheduler that jointly decides bitrates and fine-grained resource allocation for SR at the edge, and that learns to optimize the long-term QoE of distributed end users. To handle the time-varying and complex decision space as well as a non-smooth objective function, Rosevin realizes a novel online combinatorial learning algorithm that integrates convex optimization theory with online learning techniques. In addition to theoretically analyzing its performance, we implement an SR-assisted video streaming prototype of Rosevin and demonstrate its advantages over several video delivery benchmarks.
Speaker
Speaker biography is not available.

TBSR: Tile-Based 360° Video Streaming with Super-Resolution on Commodity Mobile Devices

Lei Zhang and Haobin Zhou (Shenzhen University, China); Haiyang Wang (University of Minnesota at Duluth, USA); Laizhong Cui (Shenzhen University, China)

Streaming 360° videos demands excessive bandwidth. Tile-based streaming and super-resolution are two widely studied approaches to alleviating bandwidth shortage and enhancing user experience in such real-time video streaming systems. The former prioritizes the transmission of a fraction of the 360° video according to the user's viewport, while the latter computationally enhances the streamed video to higher resolutions. However, these two approaches bring substantial complexity and computation overhead and thus suffer from resource bottlenecks on constrained mobile hardware. This paper proposes TBSR, a practical mobile 360° video streaming system that incorporates in-time super-resolution into tile-based streaming on commodity mobile devices.
We present the designs of three key mechanisms: a rate adaptation method with macro tile grouping to reduce decoding computation, a decoding and SR scheduler for different types of tasks to achieve the best cost efficiency, and a workload adjustment method to control the number of tasks given the available capacity. We further implement a TBSR prototype. Our performance evaluation shows that TBSR outperforms existing methods, improving QoE by up to 32\% and achieving bandwidth savings of 26\%.
Speaker
Speaker biography is not available.

Smart Data-Driven Proactive Push to Edge Network for User-Generated Videos

Xiaoteng Ma (Tsinghua University, China); Qing Li (Peng Cheng Laboratory, China); Junkun Peng (Tsinghua University, China); Gareth Tyson (The Hong Kong University of Science and Technology & Queen Mary University of London, Hong Kong); Ziwen Ye and Shisong Tang (Tsinghua University, China); Qian Ma (ByteDance Technology Co., Ltd., China); Shengbin Meng (ByteDance Inc., China); Gabriel-Miro Muntean (Dublin City University, Ireland)

Video Content Delivery Networks (CDNs) have started incorporating lightweight edge nodes, e.g., WiFi access points, to save costs. Thus, CDNs must intelligently select which video files should be placed at their core data centers versus these edge nodes. This is more complex than traditional CDN management, as lightweight edge nodes are far more numerous and unstable than data centers. With this in mind, we present SDPush, a system for managing content placement in edge CDNs. SDPush tackles two problems. First, SDPush must select which files to proactively push; we build a file popularity prediction model that effectively identifies video files that will go on to receive many views. Second, SDPush must determine how many replicas of each file to push; we design a model to predict the benefits of pushing particular files (in terms of traffic savings) and then formulate the replica decision problem as a lightweight optimization problem solvable within seconds, even for platforms with millions of daily active users. Through a trace-driven evaluation and a live deployment on a real video platform, we validate SDPush's effectiveness: it offloads 12.1% to 23.9% of peak-period traffic from the data center to edge nodes, thereby reducing CDN costs.
Speaker Xiaoteng Ma

Xiaoteng Ma received his B.Eng. degree in 2017 and his Ph.D. in 2024. His research interests include edge-assisted multimedia delivery and resource allocation in hybrid cloud-edge-client networks.


Session Chair

Lin Wang (Paderborn University, Germany)

Enter Zoom
Session B-3

B-3: Satellite Networks

Conference
4:00 PM — 5:30 PM PDT
Local
May 21 Tue, 6:00 PM — 7:30 PM CDT
Location
Regency B

Your Mega-Constellations Can be Slim: A Cost-Effective Approach for Constructing Survivable and Performant LEO Satellite Networks

Zeqi Lai, Yibo Wang, Hewu Li and Qian Wu (Tsinghua University, China); Qi Zhang (Zhongguancun Laboratory, China); Yunan Hou (Beijing Forestry University, China); Jun Liu and Yuanjie Li (Tsinghua University, China)

Recently we have witnessed the active deployment of satellite mega-constellations with hundreds to thousands of low earth orbit (LEO) satellites, constructing emerging LEO satellite networks (LSNs) that provide ubiquitous Internet services globally. However, while the massive deployment of mega-constellations can improve the network survivability and performance of an LSN, it also brings additional sustainability challenges, such as higher deployment cost and increased risks of satellite conjunctions and debris.

In this paper, we investigate the problem: from a network perspective, how many satellites exactly do we need to construct a survivable and performant LSN? To answer this question, we first formulate the survivable and performant LSN design (SPLD) problem, which aims to find the minimum number of needed satellites to construct an LSN that can provide a sufficient amount of redundant paths, link capacity, and acceptable latency for all communication pairs served by the LSN. Second, to efficiently solve the SPLD problem, we propose MEGAREDUCE, a requirement-driven optimization mechanism, which can calculate feasible solutions for SPLD in polynomial time. Finally, we conduct extensive trace-driven simulations to verify MEGAREDUCE's cost-effectiveness in constructing survivable and performant LSNs on demand and showcase how MEGAREDUCE can help optimize the incremental deployment and long-term maintenance of future LSNs.
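The feasibility side of SPLD can be sketched as a per-pair check on a candidate constellation graph (a hedged illustration of the constraints named above, not MEGAREDUCE itself; the edge attribute name is an assumption, and link-capacity checks are omitted):

import networkx as nx

def spld_feasible(G, pairs, k_paths, max_latency):
    # G: candidate LSN topology with a "latency" attribute per edge.
    # Each communication pair needs at least k_paths edge-disjoint
    # (redundant) paths and an acceptable shortest-path latency.
    for s, t in pairs:
        if nx.edge_connectivity(G, s, t) < k_paths:
            return False
        if nx.shortest_path_length(G, s, t, weight="latency") > max_latency:
            return False
    return True

A minimizer would then search over candidate satellite counts, keeping the smallest constellation for which such a check passes.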
Speaker Zeqi Lai (Tsinghua University)



Accelerating Handover in Mobile Satellite Network

Jiasheng Wu, Shaojie Su, Xiong Wang, Jingjing Zhang and Yue Gao (Fudan University, China)

In recent years, the construction of large Low Earth Orbit (LEO) satellite constellations such as Starlink has spurred huge interest from both academia and industry. The 6G standard has recognized LEO satellite networks as a key component of the future 6G network due to their wide coverage. However, terminals on the ground experience frequent, long-latency handovers caused by the fast travel speed of LEO satellites, which negatively impacts latency-sensitive applications. To address this challenge, we propose a novel handover scheme for mobile LEO satellite networks that considerably reduces handover latency. The core idea is to predict users' access satellites, avoiding direct interaction between satellites and the core network. We introduce a fine-grained transmission process to address the synchronization problem. Moreover, we reduce the computational complexity of prediction by utilizing known information, including prior computation results, the satellites' access strategy, and their spatial distribution. Finally, we have built a prototype of the mobile satellite network, driven by the ephemerides of real LEO satellite constellations. Extensive experiments demonstrate that our proposed handover scheme reduces handover latency by 10x compared to the standard NTN handover scheme and two other existing schemes.
Speaker
Speaker biography is not available.

SKYCASTLE: Taming LEO Mobility to Facilitate Seamless and Low-latency Satellite Internet Services

Jihao Li, Hewu Li, Zeqi Lai, Qian Wu and Weisen Liu (Tsinghua University, China); Xiaomo Wang (China Academy of Electronics and Information Technology, China); Yuanjie Li and Jun Liu (Tsinghua University, China); Qi Zhang (Zhongguancun Laboratory, China)

Recent satellite constellations deployed in low earth orbit (LEO) are extending the boundary of today's Internet, constructing integrated space and terrestrial networks (ISTNs) to provide Internet services pervasively, not only for residential users but also for mobile users such as airplanes. Efficiently managing global mobility and keeping connections active is critical for operators. However, our quantitative analysis identifies that existing mobility management (MM) schemes inherently suffer from frequent connection interruptions and long latency. The fundamental challenge stems from a unique characteristic of ISTNs: not only are the users mobile, but the core network infrastructure (i.e., satellites) also changes location within the network.

To facilitate seamless and low-latency Internet services, this paper presents SKYCASTLE, a novel network-based global mobility management mechanism. SKYCASTLE incorporates two key techniques to address connection interruptions caused by space-ground handovers. First, to reduce connection interruptions, SKYCASTLE adopts distributed satellite anchors to track the location changes of mobile nodes, manage handovers and accelerate routing convergence. Second, SKYCASTLE leverages an anchor manager to schedule MM functionalities at satellites to reduce deployment costs while guaranteeing latency. Extensive evaluations combining real constellation information and popular flight trajectories demonstrate that: SKYCASTLE can improve uninterrupted time by up to 55.8% and reduce latency by 47.8%.
Speaker Jihao Li (Tsinghua University)

Jihao Li is pursuing his Ph.D. degree in the Department of Computer Science and Technology, Tsinghua University. His current research areas include the routing, transport and mobility management of integrated space and terrestrial networks.


Resource-efficient In-orbit Detection of Earth Objects

QiYang Zhang (Beijing University of Posts & Telecommunications, China); Xin Yuan and Ruolin Xing (Beijing University of Posts and Telecommunications, China); Yiran Zhang (Beijing University of Posts and Telecommunication, China); Zimu Zheng (Huawei Technologies Co., Ltd, China); Xiao Ma and Mengwei Xu (Beijing University of Posts and Telecommunications, China); Schahram Dustdar (Vienna University of Technology, Austria); Shangguang Wang (Beijing University of Posts and Telecommunications, China)

With the rapid proliferation of large Low Earth Orbit (LEO) satellite constellations, a huge amount of in-orbit data is generated and needs to be transmitted to the ground for processing. However, the traditional bent-pipe architecture of LEO constellations, which downlinks raw data to the ground, is significantly restricted in transmission capability by scarce spectrum resources and limited satellite-ground connection duration. Orbital edge computing (OEC), which exploits the computation capacities of LEO satellites and processes raw data in orbit, is envisioned as a promising solution to relieve the downlink transmission burden. Yet, with OEC, the bottleneck shifts to the inelastic computation capacities and limited energy supply of satellites. To address both the in-orbit computation and the downlink transmission bottleneck, we fully exploit the scarce satellite resources to compute in orbit and downlink as many of the required images as possible. We therefore explore satellite-ground collaboration and present a satellite-ground collaborative system named TargetFuse. TargetFuse incorporates a combination of techniques to minimize computing errors under energy and bandwidth constraints. Extensive experiments show that TargetFuse can reduce computing error by 3.4× on average, compared to onboard computing.
Speaker Qiyang Zhang (Beijing University of Posts and Telecommunications)

Qiyang Zhang is a Ph.D. candidate in computer science at the State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications. He was also a visiting student in the Distributed Systems Group at TU Wien from December 2022 to December 2023. His research interests include satellite edge computing and edge intelligence.


Session Chair

Dimitrios Koutsonikolas (Northeastern University, USA)

Enter Zoom
Session C-3

C-3: Intrusion-Detection Systems

Conference
4:00 PM — 5:30 PM PDT
Local
May 21 Tue, 6:00 PM — 7:30 PM CDT
Location
Regency C/D

Genos: General In-Network Unsupervised Intrusion Detection by Rule Extraction

Ruoyu Li (Tsinghua University, China); Qing Li (Peng Cheng Laboratory, China); Yu Zhang (Tsinghua University & Shanghai Artificial Intelligence Laboratory, China); Dan Zhao (Peng Cheng Laboratory, China); Xi Xiao and Yong Jiang (Graduate School at Shenzhen, Tsinghua University, China)

Anomaly-based network intrusion detection systems (A-NIDS) use unsupervised models to detect unforeseen attacks. However, existing A-NIDS solutions suffer from low throughput, lack of interpretability, and high maintenance costs. Recent in-network intelligence (INI) exploits programmable switches to offer line-rate deployment of NIDS. Nevertheless, current in-network NIDS are either model-specific or apply only to supervised models. In this paper, we propose Genos, a general in-network framework for unsupervised A-NIDS based on rule extraction, which consists of a Model Compiler, a Model Interpreter, and a Model Debugger. Observing that benign data are multimodal and usually located in multiple subspaces of the feature space, we utilize a divide-and-conquer approach for model-agnostic rule extraction. In the Model Compiler, we first propose a tree-based clustering algorithm to partition the feature space into subspaces, then design a decision boundary estimation mechanism to approximate the source model in each subspace. The Model Interpreter explains predictions in terms of important attributes, helping network operators understand them. The Model Debugger performs incremental updates to rectify errors by fine-tuning rules only in the affected subspaces, thus reducing maintenance costs. We implement a prototype on physical hardware, and experiments demonstrate its superior performance of 100 Gbps throughput, strong interpretability, and negligible updating overhead.
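Model-agnostic rule extraction of this flavor can be approximated with a generic teacher-student distillation (a hedged sketch only; Genos's own clustering and boundary-estimation steps are more involved, and score_fn is a hypothetical interface):

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def extract_rules(score_fn, X, threshold, max_depth=4):
    # Label traffic features with the unsupervised source model's
    # anomaly score, then fit a shallow tree whose axis-aligned range
    # rules are the kind of predicates a switch match table can hold.
    y = (score_fn(X) > threshold).astype(int)
    tree = DecisionTreeClassifier(max_depth=max_depth).fit(X, y)
    return export_text(tree)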
Speaker Ruoyu Li (Tsinghua University)

Ruoyu Li is a Ph.D. candidate at Tsinghua University, majoring in computer science and technology. Before that, he received a B.S. degree in information security from Huazhong University of Science and Technology, Wuhan, China, in 2017, and an M.S. degree in computer science from Columbia University, New York, USA, in 2019. He is also a research intern with Peng Cheng Laboratory, Shenzhen, China. His research interests mainly include intrusion/anomaly detection systems, Internet of Things security, programmable networking, and explainable/trustworthy AI.


SPIDER: A Semi-Supervised Continual Learning-based Network Intrusion Detection System

Suresh Kumar Amalapuram and Sumohana Channappayya (Indian Institute of Technology Hyderabad, India); Bheemarjuna Reddy Tamma (IIT Hyderabad, India)

Network intrusion detection (NID) aims to identify unusual network traffic patterns (distribution shifts), which requires NID systems to evolve continuously. While prior art emphasizes fully supervised, annotation-intensive continual learning methods for NID, semi-supervised continual learning (SSCL) methods require only limited annotated data. However, the inherent class imbalance (CI) in network traffic can significantly impact the performance of SSCL approaches. Previous approaches to the CI issue require storing a subset of labeled training samples from all past tasks in memory for an extended duration, potentially raising privacy concerns. The proposed SPIDER (Semi-supervised Privacy-preserving Intrusion Detection with Drift-aware Continual Learning) is a novel method that combines gradient projection memory with SSCL to handle CI effectively without storing labeled samples from previous tasks. We assess SPIDER's performance against baselines on six intrusion detection benchmarks formed over a short period and on the Anoshift benchmark spanning ten years, which includes natural distribution shifts. Additionally, we validate our approach on standard continual learning image classification benchmarks, which are known for more frequent distribution shifts than NID benchmarks. SPIDER achieves performance comparable to fully supervised and semi-supervised baseline methods while utilizing at most 20% annotated data and reducing total training time by 2x.
Speaker
Speaker biography is not available.

AOC-IDS: Autonomous Online Framework with Contrastive Learning for Intrusion Detection

Xinchen Zhang and Running Zhao (The University of Hong Kong, Hong Kong); Zhihan Jiang (The University of Hong Kong, China); Zhicong Sun (The Hong Kong Polytechnic University, Hong Kong); Yulong Ding (Southern University of Science and Technology, China); Edith C.-H. Ngai (The University of Hong Kong & Uppsala University, Hong Kong); Shuang-Hua Yang (Southern University of Science and Technology, China)

The rapid expansion of the Internet of Things (IoT) has raised increasing concern about targeted cyber attacks. Previous research primarily focused on static Intrusion Detection Systems (IDSs), which employ offline training to safeguard IoT systems. However, such static IDSs struggle with real-world scenarios where IoT system behaviors and attack strategies can evolve rapidly, necessitating dynamic and adaptable IDSs. In response to this challenge, we propose AOC-IDS, a novel online IDS that features an autonomous anomaly detection module (ADM) and a labor-free online framework for continual adaptation. To enhance data comprehension, the ADM employs an Autoencoder (AE) with a tailored Cluster Repelling Contrastive (CRC) loss function to generate distinctive representations from limited or incrementally arriving data in the online setting. Moreover, to reduce the burden of manual labeling, our online framework leverages pseudo-labels automatically generated from the decision-making process in the ADM to facilitate periodic updates of the ADM. The elimination of human intervention for labeling and decision-making boosts the system's compatibility and adaptability in the online setting, keeping it synchronized with dynamic environments. Experimental validation using the NSL-KDD and UNSW-NB15 datasets demonstrates the superior performance and adaptability of AOC-IDS, surpassing state-of-the-art solutions.
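The labor-free update loop can be sketched in a few lines (a hedged illustration; the model interface and threshold rule are hypothetical, and the paper's CRC loss and periodic scheduling are not shown):

import numpy as np

def pseudo_label_update(model, batch, threshold):
    # Reconstruction error of the autoencoder yields pseudo-labels,
    # which then drive the update without any human annotation.
    err = np.mean((model.reconstruct(batch) - batch) ** 2, axis=1)
    pseudo = (err > threshold).astype(int)   # 1 = anomalous, 0 = benign
    model.fit(batch[pseudo == 0])            # adapt on pseudo-benign data
    return pseudo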
Speaker Ke Wang
Speaker biography is not available.

RIDS: Towards Advanced IDS via RNN Model and Programmable Switches Co-Designed Approaches

Ziming Zhao (Zhejiang University, China); Zhaoxuan Li (Institute of Information Engineering Chinese Academy of Sciences, China); Zhuoxue Song and Fan Zhang (Zhejiang University, China); Binbin Chen (Singapore University of Technology and Design, Singapore)

Existing Deep Learning (DL) based Intrusion Detection Systems (IDSs) are able to characterize the sequence semantics of traffic and discover malicious behaviors. Yet DL models are often nonlinear, highly non-convex functions that are difficult to deploy in-network. In this paper, we present RIDS, a hardware-friendly Recurrent Neural Network (RNN) model co-designed with programmable switches. At its core, RIDS is powered by two tightly-coupled components: (i) rLearner, the RNN learning module with in-network deployability as a first-class requirement; and (ii) rEnforcer, the concrete dataplane design that realizes rLearner-generated models inside the network dataplane. We implement a prototype of RIDS and evaluate it on our physical testbed. The experiments show that RIDS satisfies both detection performance and high-speed bandwidth adaptation simultaneously, which none of the compared approaches can do. Notably, RIDS achieves a remarkable intrusion/malware detection effect (e.g., ∼99% F1 score) and model deployment (e.g., 100 Gbps per port), while imposing only nanoseconds of latency.
Speaker
Speaker biography is not available.

Session Chair

Tamer Nadeem (Virginia Commonwealth University, USA)

Session D-3

D-3: Federated Learning 2

Conference
4:00 PM — 5:30 PM PDT
Local
May 21 Tue, 6:00 PM — 7:30 PM CDT
Location
Regency E

Fed-CVLC: Compressing Federated Learning Communications with Variable-Length Codes

Xiaoxin Su (Shenzhen University, China); Yipeng Zhou (Macquarie University, Australia); Laizhong Cui (Shenzhen University, China); John C.S. Lui (The Chinese University of Hong Kong, Hong Kong); Jiangchuan Liu (Simon Fraser University, Canada)

In the Federated Learning (FL) paradigm, a parameter server (PS) concurrently communicates with distributed participating clients for model collection, update aggregation, and model distribution over multiple rounds, without touching the private data owned by individual clients. FL is appealing for preserving data privacy; yet the communication between the PS and scattered clients can be a severe bottleneck. Model compression algorithms, such as quantization and sparsification, have been suggested, but they generally assume a fixed code length, which does not reflect the heterogeneity and variability of model updates. In this paper, through both analysis and experiments, we present strong evidence that variable-length codes are beneficial for compression in FL. We accordingly present Fed-CVLC (Federated Learning Compression with Variable-Length Codes), which fine-tunes the code length in response to the dynamics of model updates. We develop optimal tuning strategies that minimize the loss function (equivalent to maximizing model utility) subject to the communication budget. We further demonstrate that Fed-CVLC is a general compression design that bridges quantization and sparsification with greater flexibility. Extensive experiments on public datasets demonstrate that Fed-CVLC remarkably outperforms state-of-the-art baselines, improving model utility by 1.50%-5.44%, or shrinking communication traffic by 16.67%-41.61%.
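
The intuition for variable-length codes is that quantized model updates are heavily skewed toward small magnitudes, so an entropy code assigns the frequent small values shorter codewords. The sketch below demonstrates the gap against a fixed 4-bit code using a standard Huffman code on a Laplace-distributed update; the distribution, bins, and baseline are assumptions for illustration, not Fed-CVLC's tuning strategy.

    # Hypothetical demo: variable-length vs. fixed-length coding of updates.
    import numpy as np
    from collections import Counter
    from heapq import heapify, heappush, heappop

    def huffman_lengths(freqs):
        # Standard Huffman construction; returns codeword length per symbol.
        heap = [(f, [s]) for s, f in freqs.items()]
        heapify(heap)
        lengths = Counter()
        while len(heap) > 1:
            f1, s1 = heappop(heap)
            f2, s2 = heappop(heap)
            for s in s1 + s2:
                lengths[s] += 1
            heappush(heap, (f1 + f2, s1 + s2))
        return lengths

    update = np.random.laplace(scale=0.1, size=10000)  # skewed, like real updates
    symbols = np.digitize(update, np.linspace(-1, 1, 15)).tolist()
    freqs = Counter(symbols)
    lens = huffman_lengths(freqs)
    vl_bits = sum(freqs[s] * lens[s] for s in freqs)
    print("variable-length:", vl_bits, "bits; fixed 4-bit:", 4 * len(update))
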
Speaker
Speaker biography is not available.

Titanic: Towards Production Federated Learning with Large Language Models

Ningxin Su, Chenghao Hu and Baochun Li (University of Toronto, Canada); Bo Li (Hong Kong University of Science and Technology, Hong Kong)

With the recent surge of research interest in Large Language Models (LLMs), a natural question is how pre-trained LLMs can be fine-tuned to meet the specific needs of enterprises and individual users while preserving the privacy of the data used in the fine-tuning process. On the one hand, sending private data to cloud datacenters for fine-tuning is, without a doubt, unacceptable from a privacy perspective. On the other hand, conventional federated learning requires each client to perform local training, which is not feasible for LLMs with respect to both computation costs and communication overhead. In this paper, we present Titanic, a new systems-oriented framework that allows LLMs to be fine-tuned in a privacy-preserving fashion directly on the client devices where private data is produced, while operating within the resource constraints on computation and communication bandwidth. Titanic first optimally selects a subset of clients via an efficient solution to an integer optimization problem, then partitions an LLM across multiple client devices, and finally fine-tunes the model with no or minimal loss in training performance. Our experimental results show that Titanic achieves superior training performance compared to conventional federated learning, while preserving data privacy.
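
A minimal sketch of the partitioning idea, assuming a two-client pipeline split of a small transformer: each client holds only a slice of the layers, so only activations, never raw data or the full model, cross the network. The split point and sizes are invented; Titanic's client selection and partitioning solve an integer optimization not shown here.

    # Hypothetical pipeline split of a model across two client devices.
    import torch
    import torch.nn as nn

    blocks = [nn.TransformerEncoderLayer(d_model=64, nhead=4) for _ in range(8)]
    split = 4                                  # chosen by the (omitted) optimizer
    client_a = nn.Sequential(*blocks[:split])  # runs where the private data lives
    client_b = nn.Sequential(*blocks[split:])  # runs on a second client

    x = torch.randn(10, 2, 64)                 # (sequence, batch, dim)
    activations = client_a(x)                  # only this tensor is transmitted
    out = client_b(activations)
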
Speaker Ningxin Su (University of Toronto)

Ningxin Su is a fourth-year Ph.D. student in the Department of Electrical and Computer Engineering, University of Toronto, under the supervision of Prof. Baochun Li. She received her M.E. and B.E. degrees from the University of Sheffield and Beijing University of Posts and Telecommunications in 2020 and 2019, respectively. Her research areas include distributed machine learning, federated learning, and networking. Her website is located at ningxinsu.github.io.


FairFed: Improving Fairness and Efficiency of Contribution Evaluation in Federated Learning via Cooperative Shapley Value

Yiqi Liu, Shan Chang and Ye Liu (Donghua University, China); Bo Li (Hong Kong University of Science and Technology, Hong Kong); Cong Wang (City University of Hong Kong, Hong Kong)

The quality of federated learning (FL) is highly correlated with the number and quality of the participants involved, so it is essential to design proper contribution evaluation mechanisms. Shapley Value (SV)-based techniques have been widely used to provide fair contribution evaluation. Existing approaches, however, do not support dynamic participants (e.g., joining and departure) and incur significant computation costs, making them difficult to apply in practice. Worse, participants may be incorrectly assigned negative contributions under non-IID data scenarios, further jeopardizing fairness. In this work, we propose FairFed to address the above challenges. First, given that each iteration is of equal importance, FairFed treats FL as multiple Single-stage Cooperative Games and evaluates participants at each iteration, effectively coping with dynamic participants and ensuring fairness across iterations. Second, we introduce the Cooperative Shapley Value (CSV) to rectify participants' negative values, improving fairness while preserving true negative values. Third, we prove that if participants are Strategically Equivalent, the number of participant combinations can be sharply reduced from exponential to polynomial, significantly reducing the computational complexity of CSV. Experimental results show that FairFed achieves up to 25.3x speedup and reduces deviations by three orders of magnitude compared with two state-of-the-art approximation approaches.
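
For context, the sketch below computes the exact Shapley value of each participant in a single round under a toy utility function; its cost is exponential in the number of participants, which is precisely what the strategic-equivalence result reduces to polynomial. The utility here is an assumed stand-in, and CSV's rectification of negative values is not shown.

    # Hypothetical exact per-round Shapley value over a toy utility.
    from itertools import combinations
    from math import factorial

    def shapley(players, utility):
        n = len(players)
        phi = {p: 0.0 for p in players}
        for p in players:
            others = [q for q in players if q != p]
            for k in range(n):                       # coalition sizes
                for S in combinations(others, k):
                    w = factorial(k) * factorial(n - k - 1) / factorial(n)
                    phi[p] += w * (utility(set(S) | {p}) - utility(set(S)))
        return phi

    data = {"c1": 100, "c2": 50, "c3": 10}           # toy per-client data sizes
    def utility(S):
        return sum(data[c] for c in S) ** 0.5        # assumed concave utility
    print(shapley(list(data), utility))
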
Speaker Yiqi Liu (Donghua University)



Federated Learning While Providing Model as a Service: Joint Training and Inference Optimization

Pengchao Han (Guangdong University of Technology, China); Shiqiang Wang (IBM T. J. Watson Research Center, USA); Yang Jiao (Tongji University, China); Jianwei Huang (The Chinese University of Hong Kong, Shenzhen, China)

While providing a machine learning model as a service to process users' inference requests, online applications can periodically upgrade the model using newly collected data. Federated learning (FL) enables the training of models across distributed clients while keeping the data local. However, existing work has overlooked the coexistence of model training and inference under clients' limited resources. This paper focuses on the joint optimization of model training and inference to maximize inference performance at clients. Such an optimization faces several challenges. The first is to characterize the clients' inference performance when clients may only partially participate in FL. To resolve this challenge, we introduce a new notion of age of model (AoM) to quantify client-side model freshness, based on which we use FL's global model convergence error as an approximate measure of inference performance. The second challenge is the tight coupling among clients' decisions, including participation probability in FL, model download probability, and service rates. To address these challenges, we propose an online problem approximation that reduces the problem complexity and optimizes the resources to balance the needs of model training and inference. Experimental results demonstrate that the proposed algorithm improves average inference accuracy by up to 12%.
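
As a rough illustration of the age-of-model notion (not the paper's exact formulation), the toy simulation below tracks how stale a client's local model becomes when the client downloads the global model with probability p each round; under this assumed Bernoulli model, the average AoM is roughly (1-p)/p.

    # Hypothetical AoM bookkeeping under Bernoulli model downloads.
    import random

    def average_aom(rounds=100000, p_download=0.3):
        aom, total = 0, 0
        for _ in range(rounds):
            aom = 0 if random.random() < p_download else aom + 1
            total += aom
        return total / rounds

    print("average AoM:", average_aom())   # ~ (1 - 0.3) / 0.3, about 2.33
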
Speaker
Speaker biography is not available.

Session Chair

Christopher G. Brinton (Purdue University, USA)

Session E-3

E-3: Scheduling 2

Conference
4:00 PM — 5:30 PM PDT
Local
May 21 Tue, 6:00 PM — 7:30 PM CDT
Location
Regency F

Monitoring Correlated Sources: AoI-based Scheduling is Nearly Optimal

Rudrapatna Vallabh Ramakanth, Vishrant Tripathi and Eytan Modiano (MIT, USA)

We study the design of scheduling policies to minimize monitoring error for a collection of correlated sources when only one source can be observed at any given time. We model the correlated sources as a discrete-time Wiener process, whose increments are multivariate normal random variables with a general covariance matrix that captures the correlation structure between the sources. Under a Kalman filter-based optimal estimation framework, we show that the performance of all scheduling policies that are oblivious to instantaneous error can be lower- and upper-bounded by the weighted sum of the Age of Information (AoI) across the sources, for appropriately chosen weights. We use this insight to design scheduling policies that are only a constant factor away from optimality, and make the rather surprising observation that AoI-based scheduling that ignores correlation is sufficient to obtain good performance guarantees. We also derive scaling results showing no order improvement in error performance due to correlation in our model, irrespective of the degree of correlation or the scheduling policy chosen. Finally, we provide simulation results to verify our claims.
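
A minimal sketch of the kind of AoI-based policy analyzed above: at each slot, observe the source with the largest weighted age. Taking the weights from the increment variances is an assumption made here for illustration; the paper derives the appropriate weights and the resulting bounds.

    # Hypothetical max-weighted-AoI scheduler for two correlated sources.
    import numpy as np

    Sigma = np.array([[1.0, 0.6],
                      [0.6, 2.0]])       # toy increment covariance
    w = np.diag(Sigma)                   # assumed weights: per-source variances
    age = np.zeros(2)

    for t in range(10):
        pick = int(np.argmax(w * age))   # observe the worst weighted age
        age += 1
        age[pick] = 0                    # observed source becomes fresh
        print(f"slot {t}: observe source {pick}, ages {age}")
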
Speaker
Speaker biography is not available.

Scheduling Stochastic Traffic With End-to-End Deadlines in Multi-hop Wireless Networks

Christos Tsanikidis and Javad Ghaderi (Columbia University, USA)

Scheduling deadline-constrained packets in multihop networks has received increased attention recently. However, there is very limited work on this problem for wireless networks, where links are subject to interference. Existing algorithms either provide approximation-ratio guarantees that diminish in quality as the parameters of the network scale, or hold in an asymptotic regime where the time horizon, network bandwidth, and packet arrival rates are scaled to infinity, which limits their practicality. While attaining a constant approximation ratio has been shown to be impossible in the worst-case traffic setting, it is unclear whether the same holds under stochastic traffic in a non-asymptotic setting. In this work, we show that, in the stochastic traffic setting, constant-approximation-ratio or near-optimal algorithms can be achieved. Specifically, we propose algorithms that attain an Ω((1 − ε)/β) or Ω(1 − ε) fraction of the optimal value when the number of channels is C = Ω(log(L/ε)/ε^2) or C = Ω(χ log(L/ε)/ε^3), respectively, where L is the maximum route length of packets, χ is the fractional chromatic number of the network's interference graph, and β is its interference degree. These are the first near-optimal results under nontrivial traffic and bandwidth assumptions in a non-asymptotic regime.
Speaker
Speaker biography is not available.

Train Once Apply Anywhere: Effective Scheduling for Network Function Chains Running on FUMES

Marcel Blöcher (SAP SE & TU Darmstadt, Germany); Nils Nedderhut (Vivenu & TU Darmstadt, Germany); Pavel Chuprikov (Università della Svizzera Italiana, Switzerland); Ramin Khalili (Huawei Technologies, Germany); Patrick Eugster (Università Della Svizzera Italiana (USI), Switzerland); Lin Wang (Paderborn University, Germany)

The emergence of network function virtualization has enabled network function chaining as a flexible approach for building complex network services. However, the high degree of flexibility envisioned for orchestrating network function chains introduces several challenges in supporting the dynamism in workloads and environments necessary for their realization. Existing works mostly support dynamism by re-adjusting the provisioning of network function instances, incurring reaction times that are prohibitively high in practice. Existing solutions for dynamic packet scheduling rely on centralized schedulers and a priori knowledge of traffic characteristics, and cannot handle changes in the environment such as link failures.
We fill this gap by presenting FUMES, a reinforcement learning-based distributed agent design for the runtime scheduling problem of assigning packets undergoing treatment by network function chains to network function instances. Our system design consists of multiple distributed agents that cooperatively work on the scheduling problem. A key design choice enables agents, once trained, to be applicable to unknown chains and traffic patterns (including branching) and to different environments, including link failures. The paper presents the system design and shows its suitability for realistic deployments. We empirically compare FUMES with state-of-the-art runtime scheduling solutions, showing improved scheduling decisions at lower server capacity.
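
To make the runtime scheduling problem concrete, the sketch below shows the per-packet decision a FUMES-style agent faces: choosing the next-hop instance of the next network function in the chain. The epsilon-greedy Q-table and queue penalty are invented stand-ins for the trained cooperative RL policy.

    # Hypothetical per-packet next-hop choice for a chain hop.
    import random

    instances = {"fw": ["fw0", "fw1"], "nat": ["nat0", "nat1", "nat2"]}
    q = {(nf, i): random.random() for nf in instances for i in instances[nf]}

    def next_hop(nf, queue_len, eps=0.1):
        if random.random() < eps:                    # keep exploring
            return random.choice(instances[nf])
        # learned value minus a penalty for long queues at the instance
        return max(instances[nf], key=lambda i: q[(nf, i)] - 0.01 * queue_len[i])

    queues = {"nat0": 5, "nat1": 0, "nat2": 12}
    print(next_hop("nat", queues))
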
Speaker Marcel Blöcher (SAP & TU Darmstadt)

Marcel Blöcher is currently an architect at SAP, working on resource scheduling for SAP's own data centers. He received his Ph.D. from TU Darmstadt (Germany) in 2021. His research interests span a broad range of resource scheduling problems.


EdgeTimer: Adaptive Multi-Timescale Scheduling in Mobile Edge Computing with Deep Reinforcement Learning

Yijun Hao, Shusen Yang, Fang Li, Yifan Zhang, Shibo Wang and Xuebin Ren (Xi'an Jiaotong University, China)

In mobile edge computing (MEC), resource scheduling is crucial to the performance of task requests and the cost of service providers, and it involves multi-layer heterogeneous scheduling decisions. Existing schedulers typically adopt static timescales to regularly update the scheduling decisions of each layer, without adaptively adjusting the timescales of different layers, which can result in poor performance in practice.
We observe that adaptive timescales significantly improve the trade-off between operation cost and delay performance. Based on this insight, we propose EdgeTimer, the first work to automatically generate adaptive timescales for updating multi-layer scheduling decisions using deep reinforcement learning (DRL). First, EdgeTimer uses a three-layer hierarchical DRL framework to decouple the multi-layer decision-making task into a hierarchy of independent sub-tasks, improving learning efficiency. Second, for each sub-task, EdgeTimer adopts a safe multi-agent DRL algorithm for decentralized scheduling while ensuring system reliability. We apply EdgeTimer to a wide range of Kubernetes scheduling rules and evaluate it using production traces with different workload patterns. Extensive trace-driven experiments demonstrate that EdgeTimer learns adaptive timescales irrespective of workload patterns and built-in scheduling rules, obtaining up to 9.1x more profit than existing approaches without sacrificing delay performance.
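
The core intuition can be stated as a hand-written heuristic (EdgeTimer instead learns this behavior with DRL): shorten the interval between scheduling updates when the workload fluctuates, and lengthen it when the workload is stable to save update cost.

    # Hypothetical adaptive update interval; the thresholds are assumptions.
    def next_interval(current, recent_load, prev_load, lo=1, hi=64):
        change = abs(recent_load - prev_load) / max(prev_load, 1e-9)
        if change > 0.2:                   # bursty workload: react faster
            return max(lo, current // 2)
        return min(hi, current * 2)        # stable workload: update less often

    print(next_interval(8, recent_load=120, prev_load=100))   # -> 16
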
Speaker
Speaker biography is not available.

Session Chair

Alex Sprintson (Texas A&M University, USA)

Session F-3

F-3: Network Security 3

Conference
4:00 PM — 5:30 PM PDT
Local
May 21 Tue, 6:00 PM — 7:30 PM CDT
Location
Prince of Wales/Oxford

Periscoping: Private Key Distribution for Large-Scale Mixnets

Shuhao Liu (Shenzhen Institute of Computing Sciences, China); Li Chen (University of Louisiana at Lafayette, USA); Yuanzhong Fu (Unaffiliated, China)

Mix networks, or mixnets, are one of the fundamental building blocks of anonymity systems. To defend against epistemic attacks, existing free-route mixnet designs require all clients to maintain a consistent, up-to-date view of the entire key directory. This, however, inevitably raises a performance concern as the system scales out: in a larger mixnet, a client consumes more bandwidth updating keys in the background.

This paper presents Periscoping, a key distribution protocol for mixnets at scale. Periscoping relaxes the download-all requirement for clients: instead, it allows a client to selectively download a constant number of entries of the key directory while guaranteeing the privacy of the selections. Periscoping achieves this goal via a novel Private Information Retrieval scheme constructed from constrained Pseudorandom Functions. Moreover, the protocol integrates seamlessly into mixnet operations and is readily applicable to existing mixnet systems as an extension at minimal cost. Our experiments show that, with millions of mixes, it can reduce the traffic load of a mixnet by orders of magnitude, with only minor computational and bandwidth overhead.
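
For intuition about retrieving one directory entry without revealing which, the sketch below shows the classic two-server XOR-based PIR primitive. This is only an illustration of the goal; Periscoping's scheme is built from constrained pseudorandom functions and differs from this textbook construction.

    # Hypothetical 2-server XOR PIR over a toy key directory.
    import secrets

    directory = [b"key-of-mix-%d" % i for i in range(8)]  # fixed-length records
    n = len(directory)

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def make_queries(index):
        q1 = [secrets.randbelow(2) for _ in range(n)]  # random subset
        q2 = q1[:]
        q2[index] ^= 1                                 # differ only at the target
        return q1, q2

    def answer(db, q):                                 # run by each server
        out = bytes(len(db[0]))
        for record, bit in zip(db, q):
            if bit:
                out = xor_bytes(out, record)
        return out

    q1, q2 = make_queries(3)
    print(xor_bytes(answer(directory, q1), answer(directory, q2)))  # entry 3
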
Speaker Shuhao Liu (Shenzhen Institute of Computing Sciences)



Detecting Adversarial Spectrum Attacks via Distance to Decision Boundary Statistics

Wenwei Zhao and Xiaowen Li (University of South Florida, USA); Shangqing Zhao (University of Oklahoma, USA); Jie Xu (University of Miami, USA); Yao Liu and Zhuo Lu (University of South Florida, USA)

Machine learning has been adopted for efficient cooperative spectrum sensing. However, it introduces an additional security risk: attacks that leverage adversarial machine learning to craft malicious spectrum sensing values that deceive the fusion center, known as adversarial spectrum attacks. In this paper, we propose an efficient framework for detecting such attacks. Our design leverages the distance to the decision boundary (DDB) observed at the fusion center and compares the training and testing DDB distributions to identify adversarial spectrum attacks. We devise a computationally efficient way to compute the DDB for machine-learning-based spectrum sensing systems. Experimental results based on realistic spectrum data show that our method, under typical settings, achieves a detection rate of up to 99% while maintaining a false alarm rate below 1%. In addition, our method for computing the DDB achieves 54%-64% improvements in computational efficiency over existing distance calculation methods. The proposed DDB-based detection framework offers a practical and efficient solution for identifying malicious sensing values created by adversarial spectrum attacks.
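
For a linear classifier the distance to the decision boundary has a closed form, which makes the detection idea easy to illustrate: compare the DDB distribution of incoming reports against the training-time distribution and alarm on a significant shift. The linear SVM and two-sample KS test below are assumed stand-ins for the paper's model and statistics.

    # Hypothetical DDB-shift detector using a linear model.
    import numpy as np
    from sklearn.svm import LinearSVC
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 8))
    y = (X[:, 0] > 0).astype(int)
    clf = LinearSVC(dual=False).fit(X, y)

    def ddb(clf, X):
        # |w.x + b| / ||w|| for a linear decision boundary
        return np.abs(clf.decision_function(X)) / np.linalg.norm(clf.coef_)

    clean = ddb(clf, rng.normal(size=(200, 8)))
    attacked = ddb(clf, 0.2 * rng.normal(size=(200, 8)))  # pushed near boundary
    print("alarm:", ks_2samp(clean, attacked).pvalue < 0.01)
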
Speaker
Speaker biography is not available.

RF-Parrot: Wireless Eavesdropping on Wired Audio

Yanni Yang and Genglin Wang (Shandong University, China); Zhenlin An (Princeton University, USA); Pengfei Hu, Xiuzhen Cheng and Guoming Zhang (Shandong University, China)

Recent works have demonstrated that audio can be eavesdropped by using radio frequency (RF) signals or videos to capture the physical surface vibrations of surrounding objects. These approaches, however, fall short when it comes to intercepting audio transmitted internally through wires. In this work, we address this gap by proposing a new eavesdropping system, RF-Parrot, that can wirelessly capture audio signals transmitted in earphone wires. Our system embeds a tiny field-effect transistor in the wire to create a battery-free retroreflector whose reflective efficiency is tied to the audio signal's amplitude. To capture the full details of the analog audio signals, we engineered a novel retroreflector using a depletion-mode MOSFET, which can be activated by any voltage of the audio signal, ensuring no information loss. We also developed a theoretical model to demystify the nonlinear transmission behavior of the retroreflector, identifying it as a convolution operation on the audio spectrum. Building on this model, we designed a novel convolutional neural network-based model to accurately reconstruct the original audio. Our extensive experimental results demonstrate that the reconstructed audio bears a strong resemblance to the original, achieving an impressive 95% accuracy in speech command recognition.
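
The convolution-on-the-spectrum behavior can be illustrated with a toy square-law nonlinearity: squaring a two-tone signal creates intermodulation products at sums and differences of the tone frequencies. The sketch below is a numerical illustration of that effect only, not the paper's MOSFET model.

    # Hypothetical demo: a nonlinearity mixes spectral components.
    import numpy as np

    fs = 8000
    t = np.arange(0, 0.1, 1 / fs)
    audio = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)
    reflected = audio + 0.3 * audio ** 2        # toy nonlinear retroreflection
    spec = np.abs(np.fft.rfft(reflected))
    peaks = np.argsort(spec)[-6:] * fs / len(t)
    print(sorted(peaks))  # DC, 440, 1000 Hz plus products at 560, 880, 1440 Hz
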
Speaker
Speaker biography is not available.

BlueKey: Exploiting Bluetooth Low Energy for Enhanced Physical-Layer Key Generation

Yawen Zheng and Fan Dang (Tsinghua University, China); Zihao Yang (Yanshan University, China); Jinyan Jiang and Wang Xu (Tsinghua University, China); Lin Wang (Yanshan University, China); Kebin Liu and Xinlei Chen (Tsinghua University, China); Yunhao Liu (Tsinghua University & The Hong Kong University of Science and Technology, China)

Bluetooth Low Energy (BLE) is a prevalent technology in various applications due to its low power consumption and wide device compatibility. Despite its numerous advantages, the encryption methods of BLE often expose devices to potential attacks. To fortify security, we investigate the application of Physical-layer Key Generation (PKG), a promising technology that enables devices to generate a shared secret key from their shared physical environment. We propose a distinctive approach that capitalizes on the inherent characteristics of BLE to facilitate efficient PKG. We harness the constant tone extension in the BLE protocol to extract comprehensive physical-layer information and introduce an innovative method that employs Legendre polynomial quantization for PKG. This method enables the exchange of secret keys with both a high key matching rate and a high key generation rate. The efficacy of our approach is validated through extensive experiments on a software-defined radio platform, underscoring its potential to enhance security in the rapidly expanding field of BLE applications.
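
A minimal sketch of the quantization step in any PKG pipeline, assuming simple median thresholding as the quantizer: two devices turn noisy but reciprocal channel measurements into mostly-agreeing bit strings. The paper's Legendre-polynomial quantizer is more sophisticated; this baseline only illustrates where quantization sits in the pipeline.

    # Hypothetical PKG quantization with a median-threshold baseline.
    import numpy as np

    rng = np.random.default_rng(1)
    channel = rng.normal(size=128)                   # shared physical randomness
    alice = channel + 0.05 * rng.normal(size=128)    # independent measurement noise
    bob = channel + 0.05 * rng.normal(size=128)

    bits_a = (alice > np.median(alice)).astype(int)
    bits_b = (bob > np.median(bob)).astype(int)
    print("key match rate:", (bits_a == bits_b).mean())  # high but not perfect
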
Speaker Yawen Zheng (Tsinghua University)



Session Chair

Pradeeban Kathiravelu (University of Alaska Anchorage, USA)

Session Reception

Welcome Reception (for Registered Attendees)

Conference
6:00 PM — 8:30 PM PDT
Local
May 21 Tue, 8:00 PM — 10:30 PM CDT
Location
34th Floor
