Session A-7

ML Security

Conference
8:30 AM — 10:00 AM EDT
Local
May 19 Fri, 8:30 AM — 10:00 AM EDT
Location
Babbio 122

Mixup Training for Generative Models to Defend Membership Inference Attacks

Zhe Ji, Qiansiqi Hu and Liyao Xiang (Shanghai Jiao Tong University, China); Chenghu Zhou (Chinese Academy of Sciences, China)

With the popularity of machine learning, there has been growing concern about trained models revealing the private information of their training data. Membership inference attacks (MIAs) pose one such threat by inferring whether a given sample participated in the training of the target model. Although MIAs have been intensively studied for discriminative models, they have seldom been investigated for generative models, nor have their defenses. In this work, we propose a mixup training method for generative adversarial networks (GANs) as a defense against MIAs. Specifically, the original training data is replaced with interpolations of the data so that GANs never overfit the original samples. The intriguing part is an analysis from the hypothesis-testing perspective that theoretically proves our method reduces the AUC of the strongest likelihood-ratio attack. Experimental results show that mixup training successfully defends against the state-of-the-art MIAs on generative models, yet without model performance degradation or any additional training effort, showing great promise for deployment in practice.
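
To make the mixup step concrete, here is a minimal sketch in Python, assuming the training data is available as a NumPy array; the Beta-distributed mixing weight and random pairing are illustrative choices, not necessarily the authors' exact scheme.

```python
import numpy as np

def mixup_batch(batch, alpha=0.4, rng=np.random.default_rng()):
    """Replace each sample in a batch with a convex combination of itself
    and another randomly chosen sample, so the GAN never sees (and thus
    cannot memorize) the raw training points."""
    # One mixing coefficient per sample, broadcast over the remaining dims.
    lam = rng.beta(alpha, alpha, size=(len(batch),) + (1,) * (batch.ndim - 1))
    perm = rng.permutation(len(batch))
    return lam * batch + (1.0 - lam) * batch[perm]

# Example: a batch of 64 flattened 28x28 images, fed to the GAN instead of the originals.
batch = np.random.rand(64, 784).astype(np.float32)
mixed = mixup_batch(batch)
```
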
Speaker Zhe Ji (Shanghai Jiao Tong University)

Zhe Ji is a master's student at Shanghai Jiao Tong University. He graduated from Shanghai Jiao Tong University with a bachelor's degree in computer science and technology. His current research interests mainly focus on privacy issues in machine learning.


Spotting Deep Neural Network Vulnerabilities in Mobile Traffic Forecasting with an Explainable AI Lens

Serly Moghadas (IMDEA Networks, Spain); Claudio Fiandrino and Alan Collet (IMDEA Networks Institute, Spain); Giulia Attanasio (IMDEA Networks, Spain); Marco Fiore and Joerg Widmer (IMDEA Networks Institute, Spain)

The ability to forecast mobile traffic patterns at large is key to resource management for mobile network operators and local authorities. Several Deep Neural Networks (DNNs) have been designed to capture the complex spatio-temporal characteristics of mobile traffic patterns at scale. These models are complex black boxes whose decisions are inherently hard to explain. Even worse, they have been proven vulnerable to adversarial attacks, which undermines their applicability in production networks. In this paper, we conduct a first in-depth study of the vulnerabilities of DNNs for large-scale mobile traffic forecasting. We propose DeExp, a new tool that leverages EXplainable Artificial Intelligence (XAI) to understand which Base Stations (BSs) are more influential for forecasting from a spatio-temporal perspective. This is challenging, as existing XAI techniques are usually applied to computer vision or natural language processing and need to be adapted to the mobile network context. Upon identifying the more influential BSs, we run state-of-the-art Adversarial Machine Learning (AML) techniques on those BSs and measure the accuracy degradation of the predictors. Extensive evaluations with real-world mobile traffic traces show that attacking BSs relevant to the predictor significantly degrades its accuracy across all the scenarios.
Speaker Claudio Fiandrino (IMDEA Networks Institute)

Claudio Fiandrino is a senior researcher at IMDEA Networks Institute. He obtained his Ph.D. degree at the University of Luxembourg in 2016. Claudio has received numerous awards for his research, including a Fulbright scholarship in 2022, a 5-year Spanish Juan de la Cierva grant, and several Best Paper Awards. He is a member of IEEE and ACM, serves on the Technical Program Committee (TPC) of several international IEEE and ACM conferences, and regularly participates in the organization of events. Claudio is a member of the Editorial Board of IEEE Networking Letters and Chair of the IEEE ComSoc EMEA Awards Committee. His primary research interests include explainable and robust AI for mobile networks, next-generation mobile networks, and multi-access edge computing.


FeatureSpy: Detecting Learning-Content Attacks via Feature Inspection in Secure Deduplicated Storage

Jingwei Li (University of Electronic Science and Technology of China, China); Yanjing Ren and Patrick Pak-Ching Lee (The Chinese University of Hong Kong, Hong Kong); Yuyu Wang (University of Electronic Science and Technology of China, China); Ting Chen (University of Electronic Science and Technology of China (UESTC), China); Xiaosong Zhang (University of Electronic Science and Technology of China, China)

Secure deduplicated storage is a critical paradigm for cloud storage outsourcing to achieve both operational cost savings (via deduplication) and outsourced data confidentiality (via encryption). However, existing secure deduplicated storage designs are vulnerable to learning-content attacks, in which malicious clients can infer the sensitive contents of outsourced data by monitoring the deduplication pattern. We show via a simple case study that learning-content attacks are indeed feasible and can infer sensitive information in a short time under a real cloud setting. To defend against such attacks, we present FeatureSpy, a secure deduplicated storage system that effectively detects learning-content attacks based on the observation that such attacks often generate a large volume of similar data. FeatureSpy builds on two core design elements, namely (i) similarity-preserving encryption that supports similarity detection on encrypted chunks and (ii) shielded attack detection that leverages Intel SGX to accurately detect learning-content attacks without being readily evaded by adversaries. Trace-driven experiments on real-world and synthetic datasets show that our FeatureSpy prototype achieves high accuracy and low performance overhead in attack detection.
Speaker Patrick P. C. Lee (The Chinese University of Hong Kong)

Patrick Lee is now a Professor of the Department of Computer Science and Engineering at the Chinese University of Hong Kong. His research interests are in storage systems, distributed systems and networks, and cloud computing.


Fast Generation-Based Gradient Leakage Attacks against Highly Compressed Gradients

Dongyun Xue, Haomiao Yang, Mengyu Ge and Jingwei Li (University of Electronic Science and Technology of China, China); Guowen Xu (Nanyang Technological University, Singapore); Hongwei Li (University of Electronic Science and Technology of China, China)

Federated learning (FL) is a distributed machine learning technology that preserves data privacy. However, recent gradient leakage attacks (GLAs) can reconstruct private training data from public gradients. These attacks either require modification of the FL model (analytics-based) or take a long time to converge (optimization-based), and they fail to deal with the highly compressed gradients used in practical FL systems. In this paper, we pioneer a generation-based GLA method called FGLA that can reconstruct batches of user data, forgoing the optimization process. Specifically, we design a feature separation technique that extracts the features of each sample in a batch and then generates the user data directly. Extensive experiments on multiple image datasets demonstrate that FGLA can reconstruct user images in milliseconds with a batch size of 512 from highly compressed gradients (0.8% compression ratio or higher), thus substantially outperforming state-of-the-art methods.
Speaker Dongyun Xue

Dongyun Xue is a graduate student at the University of Electronic Science and Technology of China, with a major research focus on artificial intelligence security.


Session Chair

Qiben Yan

Session B-7

PHY Networking

Conference
8:30 AM — 10:00 AM EDT
Local
May 19 Fri, 8:30 AM — 10:00 AM EDT
Location
Babbio 104

Transfer Beamforming via Beamforming for Transfer

Xueyuan Yang, Zhenlin An and Xiaopeng Zhao (The Hong Kong Polytechnic University, Hong Kong); Lei Yang (The Hong Kong Polytechnic University, China)

Although billions of battery-free backscatter devices (e.g., RFID tags) are intensively deployed nowadays, they still suffer from performance limitations (i.e., short reading range and high miss-reading rate) resulting from inefficient power harvesting. However, applying the classic beamforming technique to backscatter systems runs into a deadlock start problem: without enough power, the backscatter device cannot wake up to provide channel parameters; but without channel parameters, the system cannot form beams to provide power. In this work, we propose a new beamforming paradigm called transfer beamforming (TBF), in which beamforming strategies are transferred from reference tags with known positions to power up unknown neighbor tags of interest; that is, transfer beamforming is accomplished via beamforming to reference tags first, for the purpose of transfer. To do so, we adopt semi-active tags as reference tags, which can be easily powered up with a normal reader. Beamforming is then initiated and transferred to power up passive tags surrounded by reference tags. A prototype evaluation of TBF with 8 antennas shows a 99.9% inventory coverage rate in a crowded warehouse with 2,160 RFID tags. Our evaluation reveals that TBF improves power transmission by 6.9 dB and boosts inventory speed by 2× compared with state-of-the-art methods.
Speaker Xueyuan Yang (Hong Kong Polytechnic University)

My research interests are beamforming and the Internet of Things.


Prism: High-throughput LoRa Backscatter with Non-linear Chirps

Yidong Ren and Puyu Cai (Michigan State University, USA); Jinyan Jiang and Jialuo Du (Tsinghua University, China); Zhichao Cao (Michigan State University, USA)

LoRa backscatter is known for its low-power and long-range communication. In addition, concurrent transmissions from LoRa backscatter devices are desirable to enable large-scale backscatter networks. However, linear-chirp-based LoRa signals easily interfere with each other, degrading the throughput of concurrent backscatter. In this paper, we propose Prism, which utilizes different types of non-linear chirps to represent the backscattered data, allowing multiple backscatter devices to transmit concurrently in the same channel. With commercial off-the-shelf (COTS) LoRa linear chirps as the excitation source, converting a linear chirp to its non-linear counterpart is non-trivial on resource-limited backscatter devices. To address this challenge, we design a delicate error function and control the timer to trigger accurate frequency shifts. We implement Prism with customized low-cost hardware, process the signal with USRP, and evaluate its performance in both indoor and outdoor environments. The measurement results and emulation data show that Prism achieves a peak throughput of 560 kbps and supports 40 Prism tags transmitting concurrently with a 1% bit error rate in the same physical channel, which is 40× the state of the art.
Speaker Yidong Ren (Michigan State University)

Yidong Ren is a second-year PhD student at Michigan State University.


CSI-StripeFormer: Exploiting Stripe Features for CSI Compression in Massive MIMO System

Qingyong Hu (Hong Kong University of Science and Technology, Hong Kong); Hua Kang (HKUST, Hong Kong); Huangxun Chen (Huawei, Hong Kong); Qianyi Huang (Southern University of Science and Technology & Peng Cheng Laboratory, China); Qian Zhang (Hong Kong University of Science and Technology, Hong Kong); Min Cheng (Noah's Ark Lab, Huawei, Hong Kong)

The massive MIMO gain for wireless communication has been greatly hindered by the feedback overhead of channel state information (CSI), which grows linearly with the number of antennas. Recent efforts leverage DNN-based encoder-decoder frameworks to exploit correlations within the CSI matrix for better CSI compression. However, existing works do not fully exploit the unique features of CSI, resulting in unsatisfactory performance under high compression ratios and sensitivity to multipath effects. Instead of treating CSI as ordinary 2D matrices like images, we reveal the intrinsic stripe-based correlation across the CSI matrix. Driven by this insight, we propose CSI-StripeFormer, a stripe-aware encoder-decoder framework that exploits this unique stripe feature for better CSI compression. We design a lightweight encoder with asymmetric convolution kernels to capture various shape features. We further incorporate novel designs tailored to stripe features, including a novel hierarchical Transformer backbone in the decoder and a hybrid attention mechanism to extract and fuse correlations in the angular and delay domains. Our evaluation results show that our system achieves over 7 dB channel reconstruction gain under a high compression ratio of 64 in multipath-rich scenarios, significantly outperforming state-of-the-art approaches. This gain can be further improved to 17 dB given the extended embedding dimension of our backbone.
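
As a small illustration of why asymmetric kernels suit stripe-shaped CSI structure, the sketch below (assuming NumPy and SciPy; the kernel sizes and toy CSI matrix are made up) shows a 1×k kernel responding far more strongly than a k×1 kernel to a horizontal stripe.

```python
import numpy as np
from scipy.signal import convolve2d

# Toy CSI-like matrix with a horizontal stripe, e.g. energy concentrated at
# one delay tap across many subcarriers.
csi = np.zeros((32, 32))
csi[16, :] = 1.0

horiz_kernel = np.ones((1, 7)) / 7.0  # 1 x k kernel: matched to horizontal stripes
vert_kernel = np.ones((7, 1)) / 7.0   # k x 1 kernel: matched to vertical stripes

h_resp = convolve2d(csi, horiz_kernel, mode="same")
v_resp = convolve2d(csi, vert_kernel, mode="same")
print(h_resp.max(), v_resp.max())  # the horizontal kernel yields a much stronger peak response
```
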
Speaker Qingyong Hu (Hong Kong University of Science and Technology)

Qingyong Hu is a PhD student at the Hong Kong University of Science and Technology. He is currently working on bringing artificial intelligence into the IoT world, such as optimizing IoT systems with advanced algorithms and developing novel sensing systems. His research interests include, but are not limited to, AIoT, smart healthcare, and system optimization.



RIS-STAR: RIS-based Spatio-Temporal Channel Hardening for Single-Antenna Receivers

Sara Garcia Sanchez and Kubra Alemdar (Northeastern University, USA); Vini Chaudhary (Northeastern University, Boston, MA, US, USA); Kaushik Chowdhury (Northeastern University, USA)

Small form-factor single-antenna devices, typically deployed within wireless sensor networks, lack many benefits of multi-antenna receivers, such as leveraging spatial diversity to enhance signal reception reliability. In this paper, we introduce the theory of achieving spatial diversity in such single-antenna systems by using reconfigurable intelligent surfaces (RIS). Our approach, called 'RIS-STAR', proactively perturbs the wireless propagation environment multiple times within the symbol time (which is less than the channel coherence time) by reconfiguring an RIS. By leveraging the stationarity of the channel, RIS-STAR ensures that the only source of perturbation is the chosen and controllable RIS configuration. We first formulate the problem of finding the set of RIS configurations that maximizes channel hardening, which is a measure of link reliability. Our solution is independent of the transceiver's location relative to the RIS and does not require channel estimation, alleviating two key implementation concerns. We then evaluate the performance of RIS-STAR using a custom simulator and an experimental testbed composed of a PCB-fabricated RIS. Specifically, we demonstrate how a SISO link can be enhanced to perform similarly to a SIMO link, attaining an 84.6% channel hardening improvement in the presence of strong multipath and non-line-of-sight conditions.
Speaker Sara Garcia Sanchez (Northeastern University)

Sara Garcia Sanchez received the B.S. and M.S. degrees in Electrical Engineering from Universidad Politecnica de Madrid in 2016 and 2018, respectively, and the Ph.D. in Computer Engineering from Northeastern University, Boston, MA, in 2022. She currently holds a position as Research Scientist at the IBM Thomas J. Watson Research Center, NY. Her research interests include mmWave communications, reconfigurable intelligent surfaces and 5G standards.


Session Chair

Parth Pathak

Session C-7

Wireless Charging

Conference
8:30 AM — 10:00 AM EDT
Local
May 19 Fri, 8:30 AM — 10:00 AM EDT
Location
Babbio 202

Roland: Robust In-band Parallel Communication for Magnetic MIMO Wireless Power Transfer System

Wangqiu Zhou, Hao Zhou, Xiang Cui and Xinyu Wang (University of Science and Technology of China, China); Xiaoyan Wang (Ibaraki University, Japan); Zhi Liu (The University of Electro-Communications, Japan)

In recent years, receiver (RX) feedback communication has attracted increasing attention as a way to enhance the charging performance of magnetic resonant coupling (MRC) based wireless power transfer (WPT) systems. In-band implementations are preferred for their minimal overhead. However, the influence of RX-RX coupling cannot simply be ignored as it is in the RFID field, due to strong couplings and the relay phenomenon. To solve these two critical issues, we propose Roland, a robust layer-level in-band parallel communication protocol for MIMO MRC-WPT systems. Technically, we first utilize the observed channel decomposability to construct a group-level channel relationship graph that eliminates the interference caused by strong RX-RX couplings. Then, we generalize this method to deal with the RX dependency caused by the relay phenomenon. Finally, we conduct extensive experiments on a prototype testbed to evaluate the effectiveness of the proposed scheme. The results demonstrate that Roland provides ≥95% average decoding accuracy for concurrent feedback communication of 14 devices. Compared with the state-of-the-art solution, Roland achieves an average decoding accuracy improvement of 20.41%.
Speaker Hao Zhou (University of Science and Technology of China)

Hao Zhou (Member, IEEE) received the BS and PhD degrees in computer science from the University of Science and Technology of China, Hefei, China, in 1997 and 2002, respectively. From 2014 to 2016, he was a project lecturer with the National Institute of Informatics (NII), Japan, and currently he is an associate professor with the University of Science and Technology of China, Hefei, China. His research interests include Internet of Things, wireless communication, and software engineering.


Concurrent Charging with Wave Interference

Yuzhuo Ma, Dié Wu and Meixuan Ren (Sichuan Normal University, China); Jian Peng (Sichuan University, China); Jilin Yang and Tang Liu (Sichuan Normal University, China)

To improve charging performance, employing multiple wireless chargers to charge sensors concurrently is an effective approach. In such charging scenarios, the radio waves radiated from multiple chargers interfere with each other. Though a few works have recognized wave interference, they do not fully utilize the high power caused by constructive interference while avoiding the negative impact of destructive interference. In this paper, we aim to investigate the power distribution regularity of concurrent charging and take full advantage of the high power to enhance charging efficiency. Specifically, we formulate a concurrent charGing utility mAxImizatioN (GAIN) problem and build a practical charging model with wave interference. Further, we propose a concurrent charging scheme, which not only improves the power of interference-enhanced regions through charger deployment, but also finds a set of points with the highest power at which to locate sensors. Finally, we conduct both simulations and field experiments to evaluate the proposed scheme. The results demonstrate that our scheme outperforms the comparison algorithms by 40.48% on average.
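
The constructive/destructive interference the abstract exploits can be illustrated with a toy phasor-sum model; the free-space assumption, frequency, and charger positions below are made up for illustration and are not the paper's charging model.

```python
import numpy as np

def received_power(p, chargers, freq=915e6, tx_amp=1.0):
    """Toy model: the waves from all chargers add as phasors at point p;
    received power scales with the squared magnitude of the sum."""
    wavelength = 3e8 / freq
    k = 2 * np.pi / wavelength            # wavenumber
    total = 0j
    for q in chargers:
        d = np.linalg.norm(np.asarray(p, float) - np.asarray(q, float))
        total += (tx_amp / d) * np.exp(-1j * k * d)  # 1/d amplitude decay plus phase delay
    return abs(total) ** 2

chargers = [(0.0, 0.0), (1.0, 0.0)]
# Power varies strongly from point to point because the two waves arrive with
# different phases: some spots are interference-enhanced, others are suppressed.
for p in [(0.5, 0.5), (0.58, 0.5), (0.75, 0.3)]:
    print(p, round(received_power(p, chargers), 3))
```
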
Speaker Yuzhuo Ma (Sichuan Normal University)

Yuzhuo Ma received the BS degree in mechanical engineering from Soochow University, Suzhou, China, in 2019. She is studying towards the MS degree in the College of Computer Science, Sichuan Normal University. Her research interests focus on wireless charging and wireless sensor networks.


Utilizing the Neglected Back Lobe for Mobile Charging

Meixuan Ren, Dié Wu and Jing Xue (Sichuan Normal University, China); Wenzheng Xu and Jian Peng (Sichuan University, China); Tang Liu (Sichuan Normal University, China)

Benefiting from breakthroughs in wireless power transfer technology, the lifetime of Wireless Sensor Networks (WSNs) can be significantly prolonged by scheduling a mobile charger (MC) to charge sensors. Compared with omnidirectional charging, an MC equipped with a directional antenna can concentrate energy in the intended direction, making charging more efficient. However, all prior work ignores the considerable energy leakage behind the directional antenna (i.e., the back lobe), resulting in wasted energy. To address this issue, we study the fundamental problem of how to utilize the neglected back lobe and schedule the directional MC efficiently. Towards this end, we first build and verify a directional charging model that considers both the main and back lobes. Then, we focus on jointly optimizing the number of dead sensors and the energy usage effectiveness. We achieve this by introducing a scheduling scheme that utilizes both main and back lobes to charge multiple sensors simultaneously. Finally, extensive simulations and field experiments demonstrate that our scheme reduces the number of dead sensors by 49.5% and increases the energy usage effectiveness by 10.2% on average compared with existing algorithms.
Speaker Tang Liu (Sichuan Normal University)

Tang Liu is currently a Professor and vice dean of the College of Computer Science at Sichuan Normal University, where he directs the MobIle computiNg anD intelligence Sensing (MINDs) Lab. He received his B.S. degree in computer science from the University of Electronic Science and Technology of China in 2003 and the M.S. and Ph.D. degrees in computer science from Sichuan University in 2009 and 2015, respectively. From 2015 to 2016, he was a Visiting Scholar with the University of Louisiana at Lafayette.

His current research interests include Internet of Things, Wireless Networks and Mobile Computing. He has published more than 30 peer-reviewed papers in technical conference proceedings and journals, including INFOCOM, TMC, TON, TOSN, IPDPS, TWC, TVT, etc. He has served as the Reviewer for the following journals: TMC, TOSN, Computer Networks, IEEE IoT J, and so on. He also has served as the TPC member of several conferences, such as HPCC, MSN, BigCom and EBDIT.


Charging Dynamic Sensors through Online Learning

Yu Sun, Chi Lin, Wei Yang, Jiankang Ren, Lei Wang, Guowei WU and Qiang Zhang (Dalian University of Technology, China)

As a novel solution for IoT applications, wireless rechargeable sensor networks (WRSNs) have achieved widespread deployment in recent years. Existing WRSN scheduling methods have focused extensively on maximizing the network charging utility in the fixed-node case. However, when sensor nodes are deployed in dynamic environments (e.g., maritime environments) where sensors move randomly over time, existing approaches are likely to incur significant performance loss or even fail to execute normally. In this work, we focus on serving dynamic nodes whose locations vary randomly and formalize the dynamic WRSN charging utility maximization problem (termed the MATA problem). By discretizing candidate charging locations and modeling the dynamic charging process, we propose a near-optimal algorithm for maximizing charging utility. Moreover, we point out the long-short-term conflict of dynamic sensors: their short-term location distributions usually deviate from the long-term expectations. To tackle this issue, we further design an online learning algorithm based on the combinatorial multi-armed bandit (CMAB) model. It iteratively adjusts the charging strategy and adapts well to nodes' short-term location deviations. Extensive experiments and simulations demonstrate that the proposed scheme can effectively charge dynamic sensors and achieve higher charging utility than baseline algorithms in both the long term and the short term.
Speaker Yu Sun

Yu Sun received B.E. and M.E. degrees from Dalian University of Technology, Dalian, China, in 2018 and 2020, respectively. He is studying for a Ph.D. degree in the School of Software Technology, Dalian University of Technology. His research interests cover wireless power transfer and wireless rechargeable sensor networks. He has authored nearly 10 papers in journals and conferences including INFOCOM, IEEE/ACM ToN, ICNP, SECON, ICPP, and CN.


Session Chair

Yi Shi

Session D-7

Edge Computing 1

Conference
8:30 AM — 10:00 AM EDT
Local
May 19 Fri, 8:30 AM — 10:00 AM EDT
Location
Babbio 210

Adversarial Group Linear Bandits and Its Application to Collaborative Edge Inference

Yin Huang, Letian Zhang and Jie Xu (University of Miami, USA)

Multi-armed bandits are a classical framework for sequential decision making under uncertainty, with applications in many fields including computer and communication networks. The majority of existing works study bandit problems in either the stochastic regime or the adversarial regime, but the intersection of these two regimes is much less investigated. In this paper, we study a new bandit problem, called adversarial group linear bandits (AGLB), in which the reward is generated as a joint outcome of both a stochastic process and adversarial behavior. In particular, the reward that the learner receives is a result not only of the group and arm that the learner selects but also of the group-level attack decision made by the adversary. AGLB models many real-world problems, such as collaborative edge inference and multi-site online ad placement. To combat the uncertainty in the coupled stochastic and adversarial rewards, we develop a new bandit algorithm, called EXPUCB, which marries the classical LinUCB and EXP3 algorithms, and prove its sublinear regret. We apply EXPUCB to the collaborative edge inference problem to evaluate its performance. Extensive simulation results verify the superior learning ability of EXPUCB under coupled stochastic noise and adversarial attacks.
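
The sketch below illustrates the two-layer idea behind EXPUCB in simplified form: an EXP3-style adversarial learner chooses the group, and a stochastic UCB learner chooses the arm within it. For brevity it uses plain UCB1 instead of LinUCB and a made-up reward/attack model, so it is only an illustration of how the layers compose, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_groups, n_arms, horizon, eta, gamma = 3, 5, 5000, 0.05, 0.1

group_weights = np.ones(n_groups)          # EXP3 state (adversarial layer)
arm_counts = np.zeros((n_groups, n_arms))  # UCB state (stochastic layer)
arm_means = np.zeros((n_groups, n_arms))

def reward(group, arm, t):
    # Hypothetical environment: Bernoulli arm reward, zeroed out when the
    # adversary "attacks" this group (the attacked group rotates over time).
    attacked = (t // 500) % n_groups
    return 0.0 if group == attacked else float(rng.binomial(1, 0.3 + 0.1 * arm))

for t in range(1, horizon + 1):
    probs = (1 - gamma) * group_weights / group_weights.sum() + gamma / n_groups
    group = rng.choice(n_groups, p=probs)
    ucb = arm_means[group] + np.sqrt(2 * np.log(t) / np.maximum(arm_counts[group], 1))
    arm = int(np.argmax(np.where(arm_counts[group] == 0, np.inf, ucb)))  # try untried arms first
    r = reward(group, arm, t)
    arm_counts[group, arm] += 1
    arm_means[group, arm] += (r - arm_means[group, arm]) / arm_counts[group, arm]
    group_weights[group] *= np.exp(eta * r / probs[group])  # importance-weighted EXP3 update
    group_weights /= group_weights.sum()                    # keep weights numerically bounded

print("final group weights:", np.round(group_weights, 3))
```
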
Speaker Yin Huang (University of Miami)

Yin Huang is a Ph.D. candidate at the University of Miami, USA; his primary research interests are multi-armed bandits and edge computing.


Online Container Scheduling for Data-intensive Applications in Serverless Edge Computing

Xiaojun Shang, Yingling Mao and Yu Liu (Stony Brook University, USA); Yaodong Huang (Shenzhen University, China); Zhenhua Liu and Yuanyuan Yang (Stony Brook University, USA)

Introducing the emerging serverless paradigm into edge computing could avoid over- and under-provisioning of limited edge resources and make complex edge resource management transparent to application developers, which largely facilitates the cost-effectiveness, portability, and short time-to-market of edge applications. However, the computation/data dispersion and device/network heterogeneity of edge environments prevent current serverless computing platforms from acclimating to the network edge. In this paper, we address these challenges by formulating a container placement and data flow routing problem, which fully considers the heterogeneity of edge networks and the overhead of operating serverless platforms on resource-limited edge servers. We design an online algorithm to solve the problem. We further show that it attains a local optimum for each arriving container and prove its theoretical guarantee with respect to the optimal offline solution. We also conduct extensive simulations based on practical experiment results to show the advantages of the proposed algorithm over existing baselines.
Speaker Xiaojun Shang (Stony Brook University)

Xiaojun Shang received his B.Eng. degree in Information Science and Electronic Engineering from Zhejiang University, Hangzhou, China, and M.S. degree in Electronic Engineering from Columbia University, New York, USA. He is now pursuing his Ph.D. degree in Computer Engineering at Stony Brook University. His research interests lie in edge AI, serverless edge computing, online algorithm design, virtual network functions, and cloud computing. His current research focuses on enhancing processing and communication capabilities for data-intensive workflows with edge-cloud synergy, and on ensuring highly reliable, efficient, and environmentally friendly network services in edge-cloud environments.


Dynamic Edge-centric Resource Provisioning for Online and Offline Services Co-location

Tao Ouyang, Kongyange Zhao, Xiaoxi Zhang, Zhi Zhou and Xu Chen (Sun Yat-sen University, China)

Online services should generally be completed quickly in a stable running environment to meet their tight latency constraints, while offline services can be processed in a loose manner thanks to their elastic soft deadlines. To coordinate such services well at a resource-limited edge cluster, in this paper we study an edge-centric resource provisioning optimization for online and offline service co-location, where the proxy seeks to maximize timely online service performance (e.g., completion rate) while maintaining satisfactory long-term offline service performance (e.g., average throughput). However, tricky hybrid temporal couplings among provisioning decisions arise due to the heterogeneous constraints of the co-located services and their different time-scale performance metrics. We hence first propose a reactive provisioning approach that does not require prior knowledge of future system dynamics and leverages a Lagrange relaxation to devise a constraint-aware stochastic subgradient algorithm that deals with the challenge of hybrid couplings. To further boost performance by integrating powerful machine learning techniques, we also advocate a predictive provisioning approach in which future request arrivals can be estimated accurately within a limited prediction window. With rigorous theoretical analysis and extensive trace-driven evaluations, we demonstrate the superior performance of our proposed algorithms.
Speaker Tao Ouyang (Sun Yat-sen University)

Tao Ouyang received the BS degree from the School of Information Science and Technology, University of International Relations, Beijing, China in 2017 and ME degree in 2019 from the School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China, where he is currently working toward the PhD degree with the School of Computer Science and Engineering. His research interests include mobile edge computing, online learning, and optimization.


TapFinger: Task Placement and Fine-Grained Resource Allocation for Edge Machine Learning

Yihong Li (Sun Yat-sen University, China); Tianyu Zeng (Sun Yat-Sen University, China); Xiaoxi Zhang (Sun Yat-sen University, China); Jingpu Duan (Peng Cheng Laboratory, China); Chuan Wu (The University of Hong Kong, Hong Kong)

Machine learning (ML) tasks are one of the major workloads in today's edge computing networks. Existing edge-cloud schedulers allocate the requested amounts of resources to each task, falling short of flexibly utilizing the limited edge resources for ML task performance optimization. This paper proposes TapFinger, a distributed scheduler that minimizes the total completion time of ML tasks in a multi-cluster edge network through co-optimizing task placement and fine-grained multi-resource allocation. To learn the tasks' uncertain resource sensitivity and enable distributed online scheduling, we adopt multi-agent reinforcement learning (MARL) and propose several techniques to make it efficient for our ML-task resource allocation. First, TapFinger uses a heterogeneous graph attention network as the MARL backbone to abstract inter-related state features into more learnable environmental patterns. Second, the actor network is augmented through a tailored task selection phase, which decomposes the actions and encodes the optimization constraints. Third, to mitigate decision conflicts among agents, we combine Bayes' theorem and masking schemes in a novel way to facilitate our MARL model training. Extensive experiments using synthetic and test-bed ML task traces show that TapFinger can achieve up to a 28.6% reduction in task completion time and improve resource efficiency compared to state-of-the-art resource schedulers.
Speaker Yihong Li (Sun Yat-sen University)

Yihong Li received his bachelor’s degree from the School of Information Management, Sun Yat-sen University in 2021. He is currently pursuing a master’s degree with the School of Computer Science and Engineering, Sun Yat-sen University. His research interests include machine learning systems and networking.


Session Chair

Xiaonan Zhang

Session E-7

ML Applications

Conference
8:30 AM — 10:00 AM EDT
Local
May 19 Fri, 8:30 AM — 10:00 AM EDT
Location
Babbio 219

Ant Colony based Online Learning Algorithm for Service Function Chain Deployment

Yingling Mao, Xiaojun Shang and Yuanyuan Yang (Stony Brook University, USA)

Network Function Virtualization (NFV) emerges as a promising paradigm with the potential for cost efficiency, management convenience, and flexibility, in which the service function chain (SFC) deployment scheme is a crucial technology. In this paper, we propose an Ant Colony Optimization (ACO) meta-heuristic algorithm for online SFC deployment, called ACO-OSD, with the objective of jointly minimizing server operation cost and network latency. As a meta-heuristic algorithm, ACO-OSD performs better than state-of-the-art heuristic algorithms, achieving 42.88% lower total cost on average. To reduce the time cost of ACO-OSD, we design two acceleration mechanisms: the Next-Fit (NF) strategy and a many-to-one mapping between SFC deployment schemes and ant tours. Besides, for scenarios requiring real-time decisions, we propose a novel online learning framework based on the ACO-OSD algorithm, called prior-based learning real-time placement (PLRP). It realizes near real-time SFC deployment with a time complexity of O(n), where n is the total number of VNFs of all newly arrived SFCs, while maintaining a performance advantage of 36.53% lower average total cost than state-of-the-art heuristic algorithms. Finally, we perform extensive simulations to demonstrate the outstanding performance of ACO-OSD and PLRP compared with the benchmarks.
Speaker Yingling Mao (Stony Brook University)

Yingling Mao received her B.S. degree in Mathematics and Applied Mathematics from Zhiyuan College at Shanghai Jiao Tong University, Shanghai, China, in 2018. She is currently working toward the Ph.D. degree in the Department of Electrical and Computer Engineering, Stony Brook University. Her research interests include network function virtualization, edge computing, cloud computing, and quantum networks.


AutoManager: a Meta-Learning Model for Network Management from Intertwined Forecasts

Alan Collet and Antonio Bazco Nogueras (IMDEA Networks Institute, Spain); Albert Banchs (Universidad Carlos III de Madrid, Spain); Marco Fiore (IMDEA Networks Institute, Spain)

A variety of network management and orchestration (MANO) tasks take advantage of predictions to support anticipatory decisions. In many practical scenarios, such predictions entail two largely overlooked challenges: (i) the exact relationship between the predicted values (e.g., allocated resources) and the performance objective (e.g., quality of experience of end users) is in many cases tangled and cannot be known a priori, and (ii) multiple predictions contribute to the objective in an intertwined way (e.g., resources are limited and must be shared among competing flows). We present AutoManager, a novel meta-learning model that can support complex MANO tasks by addressing these two challenges. Our solution learns how multiple intertwined predictions affect a common performance goal and steers them so as to attain the correct operation point under an a priori unknown loss function. We demonstrate AutoManager on practical use cases and with real-world traffic measurements, showing how it can achieve substantial gains over state-of-the-art approaches.
Speaker Alan Collet

Alan Collet is a Ph.D. Student at IMDEA Networks Institute. He obtained two Master's degrees, one from the Illinois Institute of Technology, Chicago, USA, and one from the ENSEIRB-MATMECA, Bordeaux, France. His primary research interest and thesis subject is self-learning network intelligence.


Federated PCA on Grassmann Manifold for Anomaly Detection in IoT Networks

Tung Anh Nguyen, Jiayu He, Long Tan Le, Wei Bao and Nguyen H. Tran (The University of Sydney, Australia)

In the era of the Internet of Things (IoT), network-wide anomaly detection is a crucial part of monitoring IoT networks due to the inherent security vulnerabilities of most IoT devices. Principal Component Analysis (PCA) has been proposed to separate network traffic into two disjoint subspaces corresponding to normal and malicious behaviors for anomaly detection. However, privacy concerns and the limitations of devices' computing resources compromise the practical effectiveness of PCA. We propose a federated PCA-based Grassmannian optimization framework that coordinates IoT devices to aggregate a joint profile of normal network behaviors for anomaly detection. First, we introduce a privacy-preserving federated PCA framework to simultaneously capture the traffic profiles of various IoT devices. Then, we investigate alternating direction method of multipliers (ADMM) gradient-based learning on the Grassmann manifold to guarantee fast training and the absence of detection latency using limited computational resources. Empirical results on the NSL-KDD dataset demonstrate that our method outperforms baseline approaches. Finally, we show that the Grassmann manifold algorithm is highly suited to IoT anomaly detection, permitting a drastic reduction in the analysis time of the system. To the best of our knowledge, this is the first federated PCA algorithm for anomaly detection that meets the requirements of IoT networks.
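
For context, the classical PCA subspace method the abstract builds on can be sketched as follows: traffic vectors are projected onto the top principal components (the "normal" subspace), and a large residual norm flags an anomaly. The federated, Grassmann-manifold training of the paper is not reproduced here; the data and threshold below are illustrative.

```python
import numpy as np

def fit_normal_subspace(X, k=3):
    """Fit the top-k principal directions of (assumed normal) traffic data."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def anomaly_score(x, mu, V):
    """Norm of the component of x lying outside the normal subspace."""
    centered = x - mu
    residual = centered - V.T @ (V @ centered)
    return np.linalg.norm(residual)

rng = np.random.default_rng(1)
normal = rng.normal(size=(500, 10)) @ rng.normal(size=(10, 10))  # correlated toy features
mu, V = fit_normal_subspace(normal, k=3)
threshold = np.quantile([anomaly_score(x, mu, V) for x in normal], 0.99)

print(anomaly_score(normal[0], mu, V) > threshold)          # typically False (normal sample)
print(anomaly_score(normal[0] + 10.0, mu, V) > threshold)   # typically True (perturbed sample)
```
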
Speaker Nguyen H. Tran (The University of Sydney)

Nguyen H. Tran received BS and PhD degrees (with the best PhD thesis award in 2011) from HCMC University of Technology and Kyung Hee University, in electrical and computer engineering, in 2005 and 2011, respectively. Dr Tran is an Associate Professor at the School of Computer Science, The University of Sydney. He was an Assistant Professor with the Department of Computer Science and Engineering, Kyung Hee University, from 2012 to 2017. His research group has special interests in Distributed compUting, optimizAtion, and machine Learning (DUAL group). He received several best paper awards, including IEEE ICC 2016 and ACM MSWiM 2019. He received the Korea NRF Funding for Basic Science and Research 2016-2023, an ARC Discovery Project 2020-2023, and a SOAR award 2022-2023. He has served as an Editor for several journals, including IEEE Transactions on Green Communications and Networking (2016-2020), IEEE Journal on Selected Areas in Communications (2020, in the area of distributed machine learning/federated learning), and IEEE Transactions on Machine Learning in Communications and Networking (2022).


QueuePilot: Reviving Small Buffers With a Learned AQM Policy

Micha Dery, Orr Krupnik and Isaac Keslassy (Technion, Israel)

There has been much research effort on using small buffers in backbone routers, as they would provide lower delays for users and free up capacity for vendors. Unfortunately, with small buffers, a drop-tail policy has an excessive loss rate, and existing AQM (active queue management) policies can be unreliable.

We introduce QueuePilot, an RL (reinforcement learning)-based AQM that enables small buffers in backbone routers, trading off high utilization with low loss rate and short delay. QueuePilot automatically tunes the ECN (explicit congestion notification) marking probability. After training once offline with a variety of settings, QueuePilot produces a single lightweight policy that can be applied online without further learning. We evaluate QueuePilot on real networks with hundreds of TCP connections, and show that its performance with small buffers exceeds that of existing algorithms, and even exceeds their performance with larger buffers.
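
For readers unfamiliar with AQM, the hand-tuned RED-style rule below shows what "tuning the ECN marking probability" means in practice; QueuePilot's contribution is to replace such a fixed rule with a single policy learned offline by reinforcement learning (the thresholds here are arbitrary examples).

```python
import random

def ecn_mark(queue_len, min_th=5, max_th=15, max_p=0.1):
    """RED-style rule: mark arriving packets with a probability that ramps up
    linearly with queue occupancy between two thresholds."""
    if queue_len <= min_th:
        return False
    if queue_len >= max_th:
        return True
    p = max_p * (queue_len - min_th) / (max_th - min_th)
    return random.random() < p

# Marking becomes more aggressive as the (small) buffer fills up.
print([ecn_mark(q) for q in (2, 10, 20)])
```
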
Speaker Micha Dery (Technion)

Micha Dery received his B.Sc. and M.Sc. from the Department of Electrical and Computer Engineering at the Technion - Israel Institute of Technology. He is interested in ML applications in networking, mobile ad-hoc networks, and distributed systems.


Session Chair

Baochun Li (University of Toronto)

Session F-7

Distributed Learning

Conference
8:30 AM — 10:00 AM EDT
Local
May 19 Fri, 8:30 AM — 10:00 AM EDT
Location
Babbio 220

Matching DNN Compression and Cooperative Training with Resources and Data Availability

Francesco Malandrino (CNR-IEIIT, Italy); Giuseppe Di Giacomo (Politecnico Di Torino, Italy); Armin Karamzade (University of California Irvine, USA); Marco Levorato (University of California, Irvine, USA); Carla Fabiana Chiasserini (Politecnico di Torino & CNIT, IEIIT-CNR, Italy)

To make machine learning (ML) sustainable and able to run on the diverse devices where the relevant data resides, it is essential to compress ML models as needed, while still meeting the required learning quality and time performance. However, how much and when an ML model should be compressed, and where its training should be executed, are hard decisions to make, as they depend on the model itself, the resources of the available nodes, and the data such nodes own. We model the network system focusing on the training of DNNs, formalize the above problem, and, given its NP-hardness, propose an approximate dynamic programming formulation that we solve through the PACT algorithmic framework. Importantly, the latter leverages a time-expanded graph representing the learning process, and a data-driven and theoretical approach for predicting the loss trajectories to be expected as a consequence of training decisions. We prove that PACT's solutions can get as close to the optimum as desired, at the cost of an increased time complexity, and that, in any case, its worst-case complexity is polynomial. Numerical results also show that, even under the most disadvantageous settings, PACT outperforms state-of-the-art alternatives and closely matches the minimum overall energy cost.
Speaker Carla Fabiana Chiasserini (Politecnico di Torino)

Carla Fabiana Chiasserini is an IEEE Fellow and a Full Professor at Politecnico di Torino, Italy. Her research interests include architectures, protocols, and performance analysis of wireless networks, as well as networking support for machine learning.


On the Limit Performance of Floating Gossip

Gianluca Rizzo (HES SO Valais, Switzerland & Universita' di Foggia, Italy); Noelia Perez Palma (University of Murcia & University Carlos III, Spain); Vincenzo Mancuso and Marco G Ajmone Marsan (IMDEA Networks Institute, Spain)

In this paper we investigate the limit performance of Floating Gossip (FG), a new, fully distributed Gossip Learning (GL) scheme which relies on Floating Content (FC) to implement location-based probabilistic model storage in an infrastructure-less manner.

We consider dynamic scenarios where continuous learning is required, and we adopt a mean field approach to investigate the limit performance of FG in terms of the amount of data that users can incorporate into their models, as a function of the main system parameters. Unlike existing approaches, in which either the communication or the computing aspects of GL are analyzed and optimized, our approach accounts for the compound impact of both aspects. We validate our results through detailed simulations, showing good accuracy. Our model shows that Floating Gossip can be very effective in implementing continuous training and updating of machine learning models in a cooperative manner, based on opportunistic exchanges among moving users.
Speaker Gianluca Rizzo

Gianluca Rizzo received the degree in Electronic Engineering from Politecnico di Torino, Italy, in 2001. From September 2001 to December 2003, he was a researcher at Telecom Italia Lab, Torino, Italy. From 2004 to 2008 he was at EPFL Lausanne, where in 2008 he received his PhD in Computer Science. From 2009 to 2013 he was a Staff Researcher at Institute IMDEA Networks in Madrid, Spain. Since April 2013 he has been a Senior Researcher at HES SO Valais, Switzerland. His research interests are in performance evaluation of computer networks, particularly Network Calculus, and in green networking.



Communication-Aware DNN Pruning

Tong Jian, Debashri Roy, Batool Salehihikouei, Nasim Soltani, Kaushik Chowdhury and Stratis Ioannidis (Northeastern University, USA)

We propose a Communication-aware Pruning (CaP) algorithm, a novel distributed inference framework for distributing DNN computations across a physical network. Departing from conventional pruning methods, CaP takes the physical network topology into consideration and produces DNNs that are communication-aware, designed for both accurate and fast execution over such a distributed deployment. Our experiments on CIFAR-10 and CIFAR-100, two deep learning benchmark datasets, show that CaP beats state-of-the-art competitors by up to 4% w.r.t. accuracy. In experiments over real-world scenarios, it simultaneously reduces total execution time by 27%-68% with a negligible performance decrease (less than 1%).
Speaker Tong Jian (Analog Devices; Northeastern University)

Tong Jian is a Machine Learning Scientist at Analog Devices in Boston, MA, where she is working on AI for Science and building AI solutions for the intelligent edge. She completed her Ph.D. in Computer Engineering at Northeastern University in Boston in 2022, where she specialized in adversarial robustness and applied machine learning for wireless communication. During her Ph.D., she gained industry experience through internships at Nokia Bell Labs, where she worked on indoor WiFi localization, and at Amazon, focusing on improving their state-of-the-art recommendation systems.


OPA: One-Predict-All For Efficient Deployment

Junpeng Guo, Shengqing Xia and Chunyi Peng (Purdue University, USA)

Deep neural networks (DNNs) have become a ubiquitous technique in mobile and embedded systems for various vision applications. The best-fit DNN is determined by various deployment factors like source data, compute power, and QoS requirements. A "Train Once, Deploy Everywhere" paradigm has been proposed that provides a huge number of sub-networks (subnets) to fit different deployment scenarios. However, deployment factors are numerous and often dynamically changing, leading to exponentially many scenarios. It is computationally impossible to enumerate all the subnets for all scenarios to find the best-fit one. Existing works only operate at a coarse granularity: they mostly consider static factors like hardware capabilities and fail to capture dynamics in both computing resources and data contents. In this work, we propose OPA to adapt to all deployment scenarios. The core idea is to run a shallow subnet as a pioneer to perceive the actual condition of the current deployment and use its performance to predict that of all other subnets. At runtime, we quickly locate subnets whose latency is close to the requirement and conduct a small-scale search. Compared to the state of the art, OPA achieves up to 26% higher Top-1 accuracy for a given latency requirement.
Speaker Junpeng Guo (Purdue University)

Junpeng Guo is a Ph.D. candidate at Purdue University supervised by Prof. Chunyi Peng. His research interests are in the interdisciplinary field of mobile computing and computer vision, with a focus on building efficient mobile vision systems. He is currently seeking a summer internship in either a research lab or industry in the upcoming seasons.


Session Chair

Christopher G. Brinton

Session G-7

TCP and Congestion Control

Conference
8:30 AM — 10:00 AM EDT
Local
May 19 Fri, 8:30 AM — 10:00 AM EDT
Location
Babbio 221

i-NVMe: Isolated NVMe over TCP for a Containerized Environment

Seongho Lee, Ikjun Yeom and Younghoon Kim (Sungkyunkwan University, Korea (South))

Non-Volatile Memory Express (NVMe) over TCP is an efficient technology for accessing remote Solid State Drives (SSDs); however, it may cause a serious interference issue when used in a containerized environment. In this study, we propose a performance isolation scheme for NVMe over TCP in such an environment. The proposed scheme measures the CPU usage of the NVMe over TCP worker, charges it to containers in proportion to their NVMe traffic, and schedules containers to ensure isolated sharing of the CPU. However, because the worker runs with a higher priority than normal containers, it may not be possible to achieve performance isolation with container scheduling alone. To solve this problem, we also control the CPU usage of the worker by throttling NVMe over TCP traffic. The proposed scheme is implemented on a real testbed for evaluation. We perform extensive experiments with various workloads and demonstrate that the scheme can provide performance isolation even in the presence of excessive NVMe traffic.
Speaker Seongho Lee (Sungkyunkwan University, South Korea)

He is currently working toward the Ph.D. degree in computer science at Sungkyunkwan University, South Korea. His research interests include optimizing containerized environments and CPU scheduling.


Congestion Control Safety via Comparative Statics

Pratiksha Thaker (Carnegie Mellon University, USA); Tatsunori Hashimoto and Matei Zaharia (Stanford University, USA)

When congestion control algorithms compete on shared links, unfair outcomes can result, especially between algorithms that aim to prioritize different objectives. For example, a throughput-maximizing application could make the link completely unusable for a latency-sensitive application. In order to study these outcomes formally, we model the congestion control problem as a game in which agents have heterogeneous utility functions. We draw on the comparative statics literature in economics to derive simple and practically useful conditions under which all agents achieve at least ε utility at equilibrium, a minimal safety condition for the network to be useful for any application. Compared to prior analyses of similar games, we show that our framework supports a more realistic class of utility functions that includes highly latency-sensitive applications such as teleconferencing and online gaming.
Speaker Pratiksha Thaker (Carnegie Mellon University)

Pratiksha Thaker is a postdoctoral researcher at Carnegie Mellon University. She is interested in applying tools from learning theory and game theory to practical systems problems.


Gemini: Divide-and-Conquer for Practical Learning-Based Internet Congestion Control

Wenzheng Yang and Yan Liu (Tencent, China); Chen Tian (Nanjing University, China); Junchen Jiang (University of Chicago, USA); Lingfeng Guo (The Chinese University of Hong Kong, Hong Kong)

Learning-based Internet congestion control algorithms have attracted much attention due to their potential performance improvement over traditional algorithms. However, such performance improvement usually comes at the expense of black-box design and high computational overhead, which prevent large-scale deployment over production networks. To address this problem, we propose a novel Internet congestion control algorithm called Gemini. It contains a parameterized congestion control module, which is white-box designed with low computational overhead, and an online parameter optimization module, which serves to adapt the parameterized congestion control module to different networks for higher transmission performance. Extensive trace-driven emulations reveal that Gemini achieves a better balance between delay and throughput than state-of-the-art algorithms. Moreover, we successfully deploy Gemini over production networks. The evaluation results show that the average throughput of Gemini is 5% higher than that of Cubic (4% higher than that of BBR) for a mobile application downloading service and 61% higher than that of Cubic (33% higher than that of BBR) for a commercial network speed-test benchmarking service.
Speaker Wenzheng Yang (Nanjing University and Tencent, China)



Marten: A Built-in Security DRL-Based Congestion Control Framework by Polishing the Expert

Zhiyuan Pan and Jianer Zhou (SUSTech, China); XinYi Qiu ( & Peng Cheng Laboratory, China); Weichao Li (Peng Cheng Laboratory, China); Heng Pan (Institute of Computing Technology, Chinese Academy of Sciences, China); Wei Zhang (The National Computer Network Emergency Response Technical Team Coordination Center of China, China)

Deep reinforcement learning (DRL) has proven to be an effective method for improving congestion control algorithms (CCAs). However, the lack of training data and limited training scale affect the effectiveness of DRL models. Combining rule-based CCAs (such as BBR) as a guide for DRL is an effective way to improve learning-based CCAs. Through experimental measurements, we find that rule-based CCAs limit action exploration and can even trigger wrong actions in pursuit of a higher DRL reward. To overcome these constraints, we propose Marten, a framework that improves the effectiveness of rule-based CCAs as guides for DRL. Marten uses entropy as the degree of exploration and uses it to expand the exploration of DRL. Furthermore, Marten introduces a safety mechanism to avoid wrong DRL actions. We have implemented Marten in both the simulation platform OpenAI Gym and the deployment platform QUIC. The experimental results in a production network demonstrate that Marten can improve throughput by 11% and reduce latency by 8% on average compared with Eagle.
Speaker Zhiyuan Pan (Southern University of Science and Technology)

Zhiyuan Pan is studying for a master's degree at Southern University of Science and Technology. His main research topics are network congestion control algorithms and deep reinforcement learning algorithms.


Session Chair

Ehab Al-Shaer

Session Break-1-Day3

Coffee Break

Conference
10:00 AM — 10:30 AM EDT
Local
May 19 Fri, 10:00 AM — 10:30 AM EDT
Location
Babbio Lobby

Session Poster-2

Poster Session 2

Conference
10:00 AM — 12:00 PM EDT
Local
May 19 Fri, 10:00 AM — 12:00 PM EDT
Location
Babbio Lobby

Quantifying the Impact of Base Station Metrics on LTE Resource Block Prediction Accuracy

Darijo Raca (University of Sarajevo, Bosnia and Herzegovina); Jason J Quinlan, Ahmed H. Zahran and Cormac J. Sreenan (University College Cork, Ireland); Riten Gupta (Meta Platforms, Inc., USA); Abhishek Tiwari (Meta Platforms Inc., USA)

Accurate prediction of cellular link performance is a cornerstone for many adaptive applications, such as video streaming. State-of-the-art solutions focus on distributed device-based methods relying on historic throughput and PHY metrics obtained through device APIs. In this paper, we study the impact of centralised solutions that integrate information collected from other network nodes. Specifically, we develop and compare machine learning inference engines for both distributed and centralised approaches to predict LTE physical resource blocks using ns-3 simulation. Our results illustrate that network load is the most important feature in the centralised approaches, halving the resource block prediction error to 14% compared with 28% for the distributed case.
Speaker
Speaker biography is not available.

Meta-material Sensors-enabled Internet of Things: Angular Range Extension

Taorui Liu (Peking University, China); Jingzhi Hu (Nanyang Technological University, Singapore); Hongliang Zhang and Lingyang Song (Peking University, China)

In the coming 6G communications, the Internet of Things (IoT) is the core foundational technology for various key areas. According to related studies, the number of IoT sensors deployed in 6G will be approximately 10 times larger than that in 5G. Therefore, IoT sensors in 6G are required to have extremely low cost, low power consumption, and high robustness so that they can effectively sustain massive deployment. However, traditional sensors, which contain costly and vulnerable fine structures and require external power sources, can hardly meet the above requirements. Fortunately, meta-material IoT (meta-IoT) sensors are simple in structure, low in cost, and can be powered via wireless signals, showing great potential for application.

However, existing meta-IoT sensing systems are limited to normal or specular reflection, while in practical sensing scenarios such as smart home, intelligent industry, transportation, and agriculture, the receivers are often deployed on movable objects such as mobile phones, intelligent robots, and vehicles. Thus, the position of the receivers relative to the meta-IoT sensor array is generally dynamic within an angular range rather than at a particular angle, which is challenging as all units in existing meta-IoT sensors are assumed to be the same, resulting in an uncontrollable reflection direction. To address this challenge, we propose a design of a meta-IoT sensing system comprising meta-IoT sensors that can support transmitter deployment at any given angle and receiver deployment in an extended angular range.
Speaker
Speaker biography is not available.

MatGAN: Sleep Posture Imaging using Millimeter-Wave Devices

Aakriti Adhikari, Sanjib Sur and Siri Avula (University of South Carolina, USA)

0
This work presents MatGAN, a system that uses millimeter-wave (mmWave) signals to capture high-quality images of a person's body while they sleep, even if they are covered by a blanket. Unlike existing sleep monitoring systems, MatGAN enables fine-grained monitoring, is non-invasive to privacy, and can work under obstruction and in low-light conditions, which is critical for sleep monitoring. MatGAN utilizes generative models to generate high-quality images from mmWave reflected signals that accurately represent sleep postures under a blanket. Early results indicate that MatGAN can effectively generate sleep posture images with a median IoU of 0.64.
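The median IoU reported above compares a predicted posture silhouette against a ground-truth mask. A minimal version of that metric for binary masks (illustrative only, not the authors' evaluation code) is:

    import numpy as np

    def iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
        """Intersection-over-Union of two same-shaped boolean masks."""
        pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
        union = np.logical_or(pred, gt).sum()
        if union == 0:
            return 1.0  # both masks empty: treat as a perfect match
        return float(np.logical_and(pred, gt).sum() / union)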
Speaker Aakriti Adhikari (University of South Carolina)

Aakriti Adhikari is currently pursuing her Ph.D. in the Department of Computer Science and Engineering at the University of South Carolina, Columbia. Her research focuses on wireless systems and ubiquitous sensing, particularly in developing at-home wireless solutions in the healthcare domain using millimeter-wave (mmWave) technology in 5G and beyond devices. Her research has been regularly published in top conferences in these areas, such as IEEE SECON, ACM IMWUT/UBICOMP, HotMobile, and MobiSys. Aakriti has received multiple awards, including student travel grants for conferences like IEEE INFOCOM (2023), ACM HotMobile (2023), and MobiSys (2022). Additionally, she currently has three patents pending. She has also been invited to participate in the CRA-WP Grad Cohort for Women (2023) and the Grace Hopper Celebration (2020, 2021).



Poster Abstract: Performance of Scalable Cell-Free Massive MIMO in Practical Network Topologies

Yunlu Xiao (RWTH Aachen University, Germany); Petri Mähönen (RWTH Aachen University, Germany & Aalto University, Finland); Ljiljana Simić (RWTH Aachen University, Germany)

1
We study the performance of scalable cell-free massive MIMO (CF-mMIMO) in practical urban network topologies in the cities of Frankfurt and Seoul. Our results show that the gains of CF-mMIMO, in terms of high and uniform network throughput, are limited in practical urban topologies compared with what theory predicts for uniformly distributed random networks. We show that this is due to the locally non-uniform spatial distribution of access points characteristic of realistic network topologies, which results in inferior throughput, especially for the worst-served users in the network.
Speaker Ljiljana Simić; Yunlu Xiao
Dr Ljiljana Simić is currently Principal Scientist at the Institute for Networked Systems at RWTH Aachen University, Germany. She received her BE (Hons.) and PhD degrees in Electrical and Electronic Engineering from The University of Auckland in 2006 and 2011, respectively. Her research interests are in mm-wave networking, efficient spectrum sharing paradigms, cognitive and cooperative communication, self-organizing and distributed networks, and telecommunications policy. She was Co-Chair of the IEEE INFOCOM 2018 Workshop on Millimeter-Wave Networked Systems, Co-Chair of the ACM MobiCom 2019 Workshop on Millimeter-wave Networks and Sensing Systems, and Guest Editor of an IEEE JSAC Special Issue on Millimeter-Wave Networking. She is serving as an Associate Editor of IEEE Networking Letters and Editor of IEEE Transactions on Wireless Communications.

Towards a Network Aware Model of the Time Uncertainty Bound in Precision Time Protocol

Yash Deshpande and Philip Diederich (Technical University of Munich, Germany); Wolfgang Kellerer (Technische Universität München, Germany)

1
Synchronizing the system time between devices by exchanging timestamped messages over the network is a popular method to achieve time consistency in distributed applications. Accurate time synchronization is essential in applications such as cellular communication, industrial control, and transactional databases. These applications consider the maximum possible time offset, or the Time Uncertainty Bound (TUB), in the network when configuring their guard bands and waiting times. Choosing the right value for the TUB poses a fundamental challenge to the system designer: a conservatively high TUB decreases the chances of time-based Byzantine faults but increases latency due to larger guard bands and waiting times. The TUB is affected by the packet delay variation (PDV) of the time synchronization messages caused by congestion from background network traffic. In this work, we use Network Calculus (NC) to derive the relation between network traffic and the TUB for a network built with commercial off-the-shelf (COTS) hardware. For centrally deployed and monitored local area networks (LANs), such as those in cellular networks and datacenters, this relation can help system designers plug in a better-informed value of the TUB.
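For readers unfamiliar with Network Calculus, the flavour of such a bound can be illustrated with the textbook delay bound for a token-bucket-constrained flow crossing a rate-latency server. The paper derives a more refined, network-aware relation between background traffic and the TUB, so the sketch below is only a generic illustration with assumed parameter values.

    # Classic Network Calculus delay bound (generic illustration, not the paper's derivation):
    # a flow with arrival curve alpha(t) = b + r*t crossing a server with service curve
    # beta(t) = R*(t - T)+ experiences delay at most T + b/R, provided r <= R.
    def nc_delay_bound(burst_b_bits, rate_r_bps, service_R_bps, latency_T_s):
        assert rate_r_bps <= service_R_bps, "flow rate must not exceed the service rate"
        return latency_T_s + burst_b_bits / service_R_bps

    # A (hypothetical) TUB estimate: baseline synchronization error plus the worst-case
    # delay variation that background traffic can impose on the sync messages.
    def tub_estimate(base_error_s, pdv_bound_s):
        return base_error_s + pdv_bound_s

    d = nc_delay_bound(burst_b_bits=12_000 * 8, rate_r_bps=10e6, service_R_bps=1e9, latency_T_s=5e-6)
    print(tub_estimate(base_error_s=50e-9, pdv_bound_s=d))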
Speaker
Speaker biography is not available.

L7LB: High Performance Layer-7 Load Balancing on Heterogeneous Programmable Platforms

Xiaoyi Shi, Yifan Li, Chengjun Jia, Xiaohe Hu and Jun Li (Tsinghua University, China)

1
Layer-7 load balancing is an essential pillar in modern enterprise infrastructure. Scaling software layer-7 load balancing is inefficient: it requires hundreds of servers to meet large-scale service requirements of 1 Tbps throughput and 1M concurrent requests. In this paper, we present L7LB, built on a novel fast-path and slow-path co-design architecture running on a heterogeneous programmable server-switch. L7LB is scalable, forwarding most data packets on the Tbps-bandwidth switch chip while using the CPU to process application connections, and efficient, replacing hundreds of servers with one server-switch. The preliminary prototype demonstrates the layer-7 load balancing functionality and shows that L7LB can meet large-scale service requirements.
Speaker
Speaker biography is not available.

Deep Learning enabled Keystroke Eavesdropping Attack over Videoconferencing Platforms

Xueyi Wang, Yifan Liu and Shancang Li (Cardiff University, United Kingdom (Great Britain))

1
The COVID-19 pandemic has significantly impacted people by driving them to work from home using communication tools such as Zoom, Teams, Slack, etc. The number of users of these communication services has grown exponentially in the past two years; e.g., Teams reached 270 million annual users in 2022, and Zoom averaged 300 million daily active users on its videoconferencing platform. However, using emerging artificial intelligence techniques, new cyber attack tools expose these services to eavesdropping or disruption. This work investigates keystroke eavesdropping attacks on physical keyboards, using deep learning techniques to analyze the acoustic emanations of keystrokes and identify victims' keystrokes. An accurate context-free inference algorithm was developed that can automatically localize keystrokes during input. The experimental results demonstrate that the accuracy of keystroke inference reaches around 90% on normal laptop keyboards.
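At a high level, such an attack segments individual keystroke sounds from a recording and classifies each segment. The toy pipeline below (naive energy-based segmentation plus average log-spectrum features) only illustrates the idea; it is not the authors' context-free inference algorithm, and the training data is assumed to exist.

    import numpy as np
    from scipy.signal import spectrogram

    def segment_keystrokes(audio, fs, win_s=0.1, thresh=5.0):
        """Return fixed-length windows around short-term energy peaks (very naive segmentation)."""
        win = int(win_s * fs)
        energy = np.convolve(audio ** 2, np.ones(win), mode="same")
        peaks = np.where(energy > thresh * energy.mean())[0]
        segments, last = [], -win
        for p in peaks:                      # keep one window per burst of loud samples
            if p - last >= win:
                segments.append(audio[max(0, p - win // 2): p + win // 2])
                last = p
        return segments

    def features(segment, fs):
        """Average log-spectrum of a keystroke segment as a crude feature vector."""
        _, _, sxx = spectrogram(segment, fs=fs, nperseg=256)
        return np.log1p(sxx).mean(axis=1)

    # A classifier (e.g. sklearn.svm.SVC) would then be fit on labelled segments:
    # clf = SVC().fit([features(s, fs) for s in segment_keystrokes(train_audio, fs)], labels)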
Speaker Xueyi Wang
Speaker biography is not available.

Explaining AI-informed Network Intrusion Detection with Counterfactuals

Gang Liu and Jiang Meng (University of Notre Dame, USA)

0
Artificial intelligence (AI) methods have been widely applied for accurate network intrusion detection (NID). However, the developers and users of NID systems cannot understand the systems' correct or incorrect decisions due to the complexity and black-box nature of the AI methods. This two-page poster paper presents a new demo system that visually offers a number of counterfactual explanations for any data example. The visualization results are generated automatically: users only need to provide the index of a data example and do not edit anything on the graph. In the future, we will extend the detection task from binary classification to multi-class classification.
Speaker Jiang Meng
Speaker biography is not available.

Sum Computation Rate Maximization in Self-Sustainable RIS-Assisted MEC

Han Li (Beijing Jiaotong University, China); Ming Liu (Beijing Jiaotong University & Beijing Key Lab of Transportation Data Analysis and Mining, China); Bo Gao and Ke Xiong (Beijing Jiaotong University, China); Pingyi Fan (Tsinghua University, China); Khaled B. Letaief (The Hong Kong University of Science and Technology, Hong Kong)

0
This paper studies a self-sustainable reconfigurable intelligent surface (SRIS)-assisted mobile edge computing (MEC) network, where a SRIS first harvests energy from a hybrid access point (HAP) and then enhances the users' offloading performance with the harvested energy. To improve computing efficiency, a sum computation rate maximization problem is formulated. Based on the alternating optimization (AO) method, an efficient algorithm is proposed to solve the formulated non-convex problem. Simulation results show that when the SRIS is deployed closer to the HAP, a higher performance gain can be achieved.
Speaker
Speaker biography is not available.

Tandem Attack: DDoS Attack on Microservices Auto-scaling Mechanisms

Anat Bremler-Barr (Tel-Aviv University, Israel); Michael Czeizler (Reichman University, Israel)

0
Auto-scaling is a well-known mechanism for adapting systems to dynamic traffic loads by automatically increasing (scaling up) and decreasing (scaling down) the number of handling resources. As software development shifted to the micro-services architecture, large software systems are nowadays composed of many independent micro-services, each responsible for specific tasks. This breakdown into fragmented applications also influenced the infrastructure side, where different services of the same application are given different hardware configurations and scaling properties. Even though the micro-services approach was created to accelerate software development, it also presents a new challenge: as systems grow larger, incoming traffic triggers multiple calls between micro-services to handle each request.
Speaker Michael Czeizler
Speaker biography is not available.

Session A-8

Internet/Web Security

Conference
10:30 AM — 12:00 PM EDT
Local
May 19 Fri, 10:30 AM — 12:00 PM EDT
Location
Babbio 122

De-anonymization Attacks on Metaverse

Yan Meng, Yuxia Zhan, Jiachun Li, Suguo Du and Haojin Zhu (Shanghai Jiao Tong University, China); Sherman Shen (University of Waterloo, Canada)

0
Virtual reality (VR) provides users with an immersive experience as a fundamental technology in the metaverse. One of the most promising properties of VR is that users' identities can be protected by changing their physical-world appearances into arbitrary virtual avatars. However, recently proposed de-anonymization attacks demonstrate the feasibility of recognizing the user's identity behind the VR avatar's mask. In this paper, we propose AvatarHunter, a non-intrusive and user-unconscious de-anonymization attack based on victims' inherent movement signatures. AvatarHunter imperceptibly collects the victim avatar's gait information by recording videos from multiple views in the VR scenario without requiring any permission. A Unity-based feature extractor is designed that preserves the avatar's movement signature while remaining immune to changes in the avatar's appearance. Real-world experiments are conducted in VRChat, one of the most popular VR applications. The experimental results demonstrate that AvatarHunter can achieve attack success rates of 92.1% and 66.9% in closed-world and open-world avatar settings, respectively, which are much better than existing works.
Speaker Yan Meng (Shanghai Jiao Tong University)

Yan Meng is a Research Assistant Professor in the Department of Computer Science and Engineering at Shanghai Jiao Tong University. He received his Ph.D. degree from Shanghai Jiao Tong University (2016–2022) and his B.Eng. degree from the Huazhong University of Science and Technology (2012–2016). His research focuses on IoT security, voice interface security, and privacy policy analysis. He has published 25 research papers, mainly in INFOCOM, CCS, USENIX Security, TDSC, and TMC. He won the Best Paper Award at SocialSec in 2015. He is the recipient of the 2022 ACM China Excellent Doctoral Dissertation Award.


DisProTrack: Distributed Provenance Tracking over Serverless Applications

Utkalika Satapathy and Rishabh Thakur (Indian Institute of Technology Kharagpur, India); Subhrendu Chattopadhyay (Institute for Development and Research in Banking Technology, India); Sandip Chakraborty (Indian Institute of Technology Kharagpur, India)

1
Provenance tracking has been widely used in the recent literature to debug system vulnerabilities and find the root causes behind faults, errors, or crashes in a running system. However, the existing approaches primarily developed graph-based models for provenance tracking over monolithic applications running directly over the operating system kernel. In contrast, the modern DevOps-based service-oriented architecture relies on distributed platforms, like serverless computing, that use container-based sandboxing over the kernel. Provenance tracking over such a distributed micro-service architecture is challenging, as the application and system logs are generated asynchronously and follow heterogeneous nomenclature and logging formats. This paper develops a novel approach to combining system and micro-service logs to generate a Universal Provenance Graph (UPG) that can be used for provenance tracking over serverless architecture. We develop a Loadable Kernel Module (LKM) for runtime unit identification over the logs by intercepting the system calls, with help from control flow graphs over the static application binaries. Finally, we design a regular-expression-based log optimization method for reverse query parsing over the generated UPG. A thorough evaluation of the proposed UPG model with different benchmarked serverless applications shows the system's effectiveness.
Speaker Utkalika Satapathy (Indian Institute of Technology, Kharagpur, India)

I am a Research Scholar in the Department of Computer Science and Engineering at the Indian Institute of Technology (IIT) Kharagpur, India. Under the supervision of Prof. Sandip Chakraborty, I am pursuing my Ph.D.

In addition, I am a member of the research group Ubiquitous Networked Systems Lab (UbiNet) at IIT Kharagpur, India. As for my research interests, they revolve around the areas of Systems, Provenance Tracking, and Distributed systems.


ASTrack: Automatic detection and removal of web tracking code with minimal functionality loss

Ismael Castell-Uroz (Universitat Politècnica de Catalunya, Spain); Kensuke Fukuda (National Institute of Informatics, Japan); Pere Barlet-Ros (Universitat Politècnica de Catalunya, Spain)

0
Recent advances in web technologies make it more difficult than ever to detect and block web tracking systems. In this work, we propose ASTrack, a novel approach to web tracking detection and removal. ASTrack uses an abstraction of the code structure based on Abstract Syntax Trees to selectively identify web tracking functionality shared across multiple web services. This new methodology allows us to: (i) effectively detect web tracking code even when using evasion techniques (e.g., obfuscation, minification or webpackaging), and (ii) to safely remove those portions of code related to tracking purposes without affecting the legitimate functionality of the website. Our evaluation with the top 10K most popular Internet domains shows that ASTrack can detect web tracking with high precision (98%), while discovering about 50K tracking code pieces and more than 3,400 new tracking URLs not previously recognized by most popular privacy-preserving tools (e.g., uBlock Origin). Moreover, ASTrack achieved a 36% reduction of functionality loss in comparison with the filter lists, one of the safest options available. Using a novel methodology that combines computer vision and manual inspection we estimate that full functionality is preserved in more than 97% of the websites.
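The core idea, fingerprinting code by its syntactic structure so that renaming and minification cannot hide it, can be sketched in a few lines. For illustration, the snippet below fingerprints Python functions with Python's own ast module; ASTrack itself works on JavaScript ASTs and uses a more involved matching procedure.

    import ast
    import hashlib

    def structural_fingerprint(source: str) -> str:
        """Hash only the AST node types, ignoring identifiers and literals."""
        shape = [type(node).__name__ for node in ast.walk(ast.parse(source))]
        return hashlib.sha256(",".join(shape).encode()).hexdigest()

    a = "def f(user):\n    tracker.send(user.id)\n"
    b = "def g(x):\n    t.send(x.y)\n"                  # renamed/minified variant, same structure
    print(structural_fingerprint(a) == structural_fingerprint(b))   # True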
Speaker Ismael Castell-Uroz (Universitat Politècnica de Catalunya)

Ismael Castell-Uroz is a Ph.D. student at the Computer Architecture Department of the Universitat Politècnica de Catalunya (UPC), Barcelona, Spain, where he received the B.Sc. degree in Computer Science in 2008 and the M.Sc. degree in Computer Architecture, Networks, and Systems in 2010. He has several years of experience in network and system administration and currently holds a Projects Scholarship at UPC. His expertise and research interest are in computer networks, especially in the field of network monitoring, anomaly detection, internet privacy and web tracking.


Secure Middlebox Channel over TLS and its Resiliency against Middlebox Compromise

Kentaro Kita, Junji Takemasa, Yuki Koizumi and Toru Hasegawa (Osaka University, Japan)

0
A large portion of Internet traffic passes through middleboxes that read or modify messages. However, as more traffic is protected with TLS, middleboxes are becoming unable to provide their functions. To leverage middlebox functionality while preserving communication security, secure middlebox channel protocols have been designed as extensions of TLS. A key idea is that the endpoints explicitly incorporate middleboxes into the TLS handshake and grant each middlebox either the read or the write permission for their messages. Because each middlebox has the least data access privilege, these protocols are resilient against the compromise of a single middlebox. However, the existing studies have not comprehensively analyzed the communication security under the scenarios where multiple middleboxes are compromised. In this paper, we present novel attacks that break the security of the existing protocols under such scenarios and then modify maTLS, the state-of-the-art protocol, so that all the attacks are prevented with marginal overhead.
Speaker Kentaro Kita (Osaka University)

Kentaro Kita received his Ph.D. in information science from Osaka University. His research interests include privacy, anonymity, security, and future networking architecture.


Session Chair

Ning Zhang

Session B-8

Scheduling

Conference
10:30 AM — 12:00 PM EDT
Local
May 19 Fri, 10:30 AM — 12:00 PM EDT
Location
Babbio 104

Target Coverage and Connectivity in Directional Wireless Sensor Networks

Tan D Lam and Dung Huynh (University of Texas at Dallas, USA)

0
This paper discusses the problem of deploying a minimum number of directional sensors equipped with directional sensing units and directional communication antennas with beam-width \(\theta_c \geq \frac{\pi}{2}\), such that the set of sensors covers a set of targets \(P\) in the 2D plane and forms a symmetric connected communication graph. As this problem is NP-hard, we propose an approximation algorithm that uses up to 3.5 times the number of omni-directional sensors required by the currently best approximation algorithm proposed by Han et al. This is a significant result since we have broken the barrier of \(2\pi/\frac{\pi}{2} = 4\) when switching from omni-directional sensors to directional ones. Moreover, we improve the approximation ratio of the strip-based algorithm for the Geometric Sector Cover problem proposed by Han et al. from 9 to 7, and we believe that this result is of interest in the area of Computational Geometry. Extensive simulations show that our algorithms only require around 3 times the number of sensors used by Han et al.'s algorithm and significantly outperform other heuristics in practice.
Speaker Tan Lam (The University of Texas at Dallas)

Tan Lam is currently a PhD student in Computer Science at The University of Texas at Dallas. He received an honors bachelor's degree in Computer Science from Ho Chi Minh City University of Science. His research interest is the design and analysis of combinatorial optimization algorithms in Wireless Sensor Networks.


Eywa: A General Approach for Scheduler Design in AoI Optimization

Chengzhang Li, Shaoran Li, Qingyu Liu, Thomas Hou and Wenjing Lou (Virginia Tech, USA); Sastry Kompella (NEXCEPTA INC, USA)

1
Age of Information (AoI) is a metric that measures the freshness of information. Since its inception, there have been active research efforts on designing scheduling algorithms for various AoI-related optimization problems. For each problem, typically a custom-designed scheduler was developed. Instead of following this custom-design path, we pursue a general framework that can be applied to design a wide range of schedulers for AoI-related optimization problems. As a first step toward this vision, we present a general framework, Eywa, that can be applied to construct high-performance schedulers for a family of AoI-related optimization problems, all sharing the common setting of an IoT data collection network. We show how to apply Eywa to solve two important AoI-related problems: minimizing the weighted sum of AoIs and minimizing the bandwidth requirement under AoI constraints. We show that for each problem, Eywa can either offer a stronger performance guarantee than state-of-the-art algorithms or provide new results that are not available in the literature.
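For intuition on the first problem, weighted-sum AoI minimization, the simulation below implements a standard max-weight index policy (schedule the source with the largest weighted age, reset its age on success). This is a generic baseline from the AoI literature, not Eywa itself.

    import random

    def max_weight_aoi(weights, success_prob, slots=10_000, seed=0):
        """Average weighted-sum AoI under the max-weight index policy."""
        rng = random.Random(seed)
        age = [1] * len(weights)                 # AoI of each source, in slots
        total = 0.0
        for _ in range(slots):
            k = max(range(len(weights)), key=lambda i: weights[i] * age[i])
            if rng.random() < success_prob[k]:   # transmission of source k succeeds
                age[k] = 0
            age = [a + 1 for a in age]           # every source ages by one slot
            total += sum(w * a for w, a in zip(weights, age))
        return total / slots

    print(max_weight_aoi(weights=[1.0, 2.0, 4.0], success_prob=[0.9, 0.8, 0.7]))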
Speaker Chengzhang Li (Ohio State University)

Chengzhang is currently a postdoc at AI-EDGE Institute, Ohio State University, supervised by Prof. Ness Shroff. He received his Ph.D. degree in Computer Engineering from Virginia Tech in 2022, supervised by Prof. Tom Hou. He received his B.S. degree in Electronic Engineering from Tsinghua University in 2017. His current research interests are real-time scheduling in 5G, Age of Information (AoI), and machine learning in wireless networks.


Dynamic Resource Allocation for Deep Learning Clusters with Separated Compute and Storage

Mingxia Li (University of Science and Technology of China, China); Zhenhua Han (Microsoft Research Asia, China); Chi Zhang (University of Science and Technology of China, China); Ruiting Zhou (Southeast University, China); Yuanchi Liu and Haisheng Tan (University of Science and Technology of China, China)

0
The separation of storage and computing in modern cloud services eases the deployment of general applications. However, with the development of accelerators such as GPUs/TPUs, Deep Learning (DL) training suffers from potential I/O bottlenecks when loading data from storage clusters. Therefore, DL training jobs need to either create a local cache in the compute cluster to reduce the bandwidth demand or scale up the I/O capacity at a higher bandwidth cost. Choosing the best strategy is challenging due to the heterogeneous cache/IO preferences of DL models, datasets shared among multiple jobs, and the dynamic GPU scaling of DL training. In this work, we exploit job characteristics based on training throughput, dataset size, and scalability. For fixed GPU allocations of jobs, we propose CBA to minimize the training cost with a closed-form approach. For clusters that can automatically scale the GPU allocations of jobs, we extend CBA to AutoCBA to support diverse job utility functions and maximize social welfare within a limited budget. Extensive experiments with production traces validate that CBA and AutoCBA can reduce I/O cost and improve total social welfare by up to 20.5% and 2.27×, respectively, over state-of-the-art schedulers for DL training.
Speaker Mingxia Li (University of Science and Technology of China)

Mingxia Li is currently a postgraduate student in computer science at the University of Science and Technology of China. Her research interests lie in networking algorithms and systems.


LIBRA: Contention-Aware GPU Thread Allocation for Data Parallel Training in High Speed Networks

Yunzhuo Liu, Bo Jiang and Shizhen Zhao (Shanghai Jiao Tong University, China); Tao Lin (Communication University of China, China); Xinbing Wang (Shanghai Jiaotong University, China); Chenghu Zhou (Chinese Academy of Sciences, China)

1
Overlapping gradient communication with backward computation is a popular technique to reduce communication cost in the widely adopted data-parallel S-SGD training. However, the resource contention between computation and All-Reduce communication in GPU-based training reduces the benefits of overlap. With GPU cluster networks evolving from low-bandwidth TCP to high-speed networks, more GPU resources are required to efficiently utilize the bandwidth, making the contention more noticeable. Existing communication libraries fail to account for such contention when allocating GPU threads and have suboptimal performance. In this paper, we propose to mitigate the contention by balancing the computation and communication time. We formulate an optimization problem that decides the communication thread allocation to reduce the overall backward time. We develop a dynamic-programming-based near-optimal solution and extend it to co-optimize thread allocation with tensor fusion. We conduct simulation studies and real-world experiments on an 8-node GPU cluster with a 50 Gbps RDMA network, training four representative DNN models. Results show that our method reduces backward time by 10%-20% compared with Horovod-NCCL, and by 6%-13% compared with tensor-fusion-optimization-only methods. Simulation shows that our method achieves the best scalability, with a training speedup of 1.2x over the best-performing baseline as we scale up the cluster size.
Speaker Yunzhuo Liu (Shanghai Jiao Tong University)

Yunzhuo Liu received his B.S. degree from Shanghai Jiao Tong University, where he is currently pursuing the Ph.D. degree at the John Hopcroft Center. He has published papers in top-tier conferences, including SIGMETRICS, INFOCOM, ACM MM, and ICNP. His research interests include distributed training and programmable networks.


Session Chair

Ben Liang

Session C-8

Network Applications

Conference
10:30 AM — 12:00 PM EDT
Local
May 19 Fri, 10:30 AM — 12:00 PM EDT
Location
Babbio 202

Latency-First Smart Contract: Overclock the Blockchain for a while

Huayi Qi, Minghui Xu and Xiuzhen Cheng (Shandong University, China); Weifeng Lv (Beijing University of Aeronautics and Astronautics, China)

0
The blockchain system has a limited throughput for processing transactions, and sometimes gets overwhelmed by a great number of them. Latency-sensitive users have to bid against each other and pay higher fees to make sure their transactions are processed with priority. However, the blockchain system is not busy all the time: most of the time (76% in Ethereum), a lot of computation power is wasted while fewer users are sending transactions. To rebalance the load and reduce latency for users, we propose the latency-first smart contract model, which allows users to submit a commitment during heavy-load periods and then finish the remaining work during their spare time. From the chain's view, the blockchain is "overclocked" for a while and then pays back. We propose a programming tool for our model, and our experimental results show that applying our model greatly reduces latency under heavy load.
Speaker Huayi Qi (Shandong University)

Huayi Qi received his bachelor's degree in computer science from Shandong University in 2020. He is working toward a Ph.D. degree in the School of Computer Science and Technology, Shandong University, China. His research interests include blockchain privacy and security.


On Design and Performance of Offline Finding Network

Tong Li (Renmin University of China, China); Jiaxin Liang (Huawei Technologies, China); Yukuan Ding (Hong Kong University of Science and Technology, Hong Kong); Kai Zheng (Huawei Technologies, China); Xu Zhang (Nanjing University, China); Ke Xu (Tsinghua University, China)

0
Recently, industrial pioneers such as Apple and Samsung have offered a new generation of offline finding network (OFN) that enables crowd search for missing devices without leaking private data. Specifically, OFN leverages nearby online finder devices to conduct neighbor discovery via Bluetooth Low Energy (BLE), so as to detect the presence of offline missing devices and report an encrypted location back to the owner via the Internet. The user experience in OFN is closely related to the success ratio (possibility) of finding the lost device, where the latency of the prerequisite stage, i.e., neighbor discovery, matters. However, the crowd-sourced finder devices show diversity in scan modes due to different power modes or different manufacturers, resulting in local optima of neighbor discovery performance. In this paper, we present a brand-new broadcast mode called ElastiCast to deal with the scan mode diversity issue. ElastiCast captures the key features of BLE neighbor discovery and globally optimizes the broadcast mode interacting with diverse scan modes. Experimental evaluation results and commercial product deployment experience demonstrate that ElastiCast is effective in achieving stable and bounded neighbor discovery latency within the power budget.
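The discovery latency that ElastiCast targets arises from how a broadcaster's advertising interval interleaves with each scanner's (scan interval, scan window) pair. The toy Monte-Carlo estimate below conveys that interplay under strong simplifications (single channel, no packet loss); it is not ElastiCast's model.

    import random

    def mean_discovery_latency_ms(adv_interval, scan_interval, scan_window, trials=2000, seed=1):
        """Mean time until an advertising event falls inside a scan window (all times in ms)."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(trials):
            offset = rng.uniform(0, scan_interval)      # random phase between the two devices
            t = 0.0
            while True:
                t += adv_interval + rng.uniform(0, 10)  # BLE adds a 0-10 ms pseudo-random advDelay
                if (t + offset) % scan_interval < scan_window:
                    total += t
                    break
        return total / trials

    # e.g. a 1280 ms advertiser meeting a scanner that listens 30 ms out of every 300 ms
    print(mean_discovery_latency_ms(adv_interval=1280, scan_interval=300, scan_window=30))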
Speaker Tong Li (Renmin University of China)

Tong Li is currently a faculty member at Renmin University of China. His research interests include networking, distributed systems, and big data.


WiseCam: Wisely Tuning Wireless Pan-Tilt Cameras for Cost-Effective Moving Object Tracking

Jinlong E (Renmin University of China, China); Lin He and Zhenhua Li (Tsinghua University, China); Yunhao Liu (Tsinghua University & The Hong Kong University of Science and Technology, China)

0
With the desirable functionality of moving object tracking, wireless pan-tilt cameras are playing critical roles in a growing diversity of surveillance environments. However, today's pan-tilt cameras oftentimes underperform when tracking frequently moving objects like humans: they are prone to losing sight of objects and incur excessive mechanical rotations that are especially detrimental in energy-constrained outdoor scenarios. The ineffectiveness and high cost of state-of-the-art tracking approaches are rooted in their adherence to the industry's simplicity principle, which leads to their stateless nature, performing gimbal rotations based only on the latest object detection. To address these issues, this paper presents WiseCam, which wisely tunes pan-tilt cameras to minimize mechanical rotation costs while maintaining long-term object tracking with low overhead. This is achieved by object trajectory construction in a panoramic space and online rotation angle determination based on spatio-temporal motion information, together with adaptively adjusted rotation generation and execution. We implement WiseCam on two types of pan-tilt cameras with different motors. Real-world evaluations demonstrate that WiseCam significantly outperforms state-of-the-art tracking approaches on both tracking duration and power consumption.
Speaker Jinlong E (Renmin University of China)

Jinlong E is currently a lecturer at Renmin University of China. His current research interests include cloud computing, edge computing, and IoT.


Effectively Learning Moiré QR Code Decryption from Simulated Data

Yu Lu, Hao Pan, Guangtao Xue and Yi-Chao Chen (Shanghai Jiao Tong University, China); Jinghai He (University of California, Berkeley, USA); Jiadi Yu (Shanghai Jiao Tong University, China); Feitong Tan (Simon Fraser University, Canada)

0
Moiré QR Code is a secure encrypted QR code system that can protect the user's QR code displayed on the screen from being accessed by attackers. However, conventional decryption methods based on image processing techniques suffer from intensive computation and significant decryption latency in practical mobile applications. In this work, we propose a deep learning-based Moiré QR code decryption framework and achieve excellent decryption performance. Considering the sensitivity of the Moiré phenomenon, collecting training data in the real world is extremely labor- and material-intensive. To overcome this issue, we develop a physical screen-imaging Moiré simulation methodology to generate a synthetic dataset that covers the entire Moiré-visible area. Extensive experiments show that the proposed decryption network can achieve a low decryption latency (0.02 seconds) and a high decryption rate (98.8%), compared with the previous decryption method's latency of 5.4 seconds and decryption rate of 98.6%.
Speaker Yu Lu (Shanghai Jiao Tong University)

Yu Lu is a Ph.D. student of computer science at Shanghai Jiao Tong University. His research interests focus on networked systems and span the areas of wireless communication and sensing, human-computer interaction, and computer vision. 


Session Chair

Qinghua Li

Session D-8

Edge Computing 2

Conference
10:30 AM — 12:00 PM EDT
Local
May 19 Fri, 10:30 AM — 12:00 PM EDT
Location
Babbio 210

Dynamic Regret of Randomized Online Service Caching in Edge Computing

Siqi Fan and I-Hong Hou (Texas A&M University, USA); Van Sy Mai (National Institute of Standards and Technology, USA)

0
This paper studies an online service caching problem, where an edge server, equipped with a prediction window of future service request arrivals, needs to decide which services to host locally subject to limited storage capacity. The edge server aims to minimize the sum of a request forwarding cost (i.e., the cost of forwarding requests to remote data centers to process) and a service instantiating cost (i.e., that of retrieving and setting up a service). Considering request patterns are usually non-stationary in practice, the performance of the edge server is measured by dynamic regret, which compares the total cost with that of the dynamic optimal offline solution. To solve the problem, we propose a randomized online algorithm with low complexity and theoretically derive an upper bound on its expected dynamic regret. Simulation results show that our algorithm significantly outperforms other state-of-the-art policies in terms of the runtime and expected total cost.
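Per service, the cost structure here is essentially a ski-rental trade-off: keep paying a small forwarding cost per request, or pay a one-off instantiation cost to host the service locally. The sketch below evaluates that trade-off with the classic deterministic break-even rule, purely for intuition; the paper's randomized algorithm, prediction window, and dynamic-regret analysis go well beyond this.

    def break_even_cost(num_requests, forward_cost=1.0, instantiate_cost=10.0):
        """Total cost when the service is instantiated once the accumulated forwarding
        cost reaches the instantiation cost (the classic 2-competitive break-even rule)."""
        spent_forwarding, total, hosted = 0.0, 0.0, False
        for _ in range(num_requests):
            if hosted:
                continue                       # hosted locally: no further cost per request
            if spent_forwarding >= instantiate_cost:
                total += instantiate_cost      # switch to hosting the service locally
                hosted = True
            else:
                spent_forwarding += forward_cost
                total += forward_cost
        return total

    print(break_even_cost(25))   # 10 forwarded requests + one instantiation = 20.0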
Speaker Siqi Fan (Texas A&M University)

My name is Siqi Fan, and I am a PhD candidate at Texas A&M University. My research focuses on Machine Learning, Online Optimization, and Edge Networking. 


SEM-O-RAN: Semantic and Flexible O-RAN Slicing for NextG Edge-Assisted Mobile Systems

Corrado Puligheddu (Politecnico di Torino, Italy); Jonathan Ashdown (United States Air Force, USA); Carla Fabiana Chiasserini (Politecnico di Torino & CNIT, IEIIT-CNR, Italy); Francesco Restuccia (Northeastern University, USA)

0
5G and beyond cellular networks (NextG) will support the continuous offloading of resource-expensive edge-assisted deep learning (DL) tasks. To this end, RAN (Radio Access Network) resources will need to be carefully "sliced" to satisfy heterogeneous application requirements while minimizing RAN usage. Existing slicing frameworks treat each DL task as equal and inflexibly define the required resources, which leads to sub-optimal performance. This work proposes SEM-O-RAN, the first semantic and flexible slicing framework for NextG Open RANs. Our key intuition is that different DL classifiers tolerate different levels of image compression, due to the semantic nature of the target classes. Therefore, compression can be semantically applied so that the networking load can be minimized. Moreover, flexibility allows SEM-O-RAN to consider multiple resource allocations leading to the same task-related performance, which allows for significantly more allocated tasks. First, we mathematically formulate the Semantic Flexible Edge Slicing Problem, demonstrate that it is NP-hard, and provide an approximation algorithm to solve it efficiently. Then, we evaluate the performance of SEM-O-RAN through extensive numerical analysis with state-of-the-art DL models, as well as real-world experiments on the Colosseum testbed. Our results show that SEM-O-RAN allocates up to 169% more tasks than the state of the art.
Speaker Corrado Puligheddu (Polytechnic University of Turin)

Corrado Puligheddu is an Assistant Professor at Politecnico di Torino, Turin, Italy, where he obtained his Ph.D. in Electrical, Electronics, and Communication Engineering in 2022. His research interests include 5G networks, Open RAN and Machine Learning.


Joint Task Offloading and Resource Allocation in Heterogeneous Edge Environments

Yu Liu, Yingling Mao, Zhenhua Liu, Fan Ye and Yuanyuan Yang (Stony Brook University, USA)

1
Mobile edge computing is becoming one of the ubiquitous computing paradigms to support applications requiring low latency and high computing capability. FPGA-based reconfigurable accelerators have high energy efficiency and low latency compared to general-purpose servers. Therefore, it is natural to incorporate reconfigurable accelerators in mobile edge computing systems. This paper formulates and studies the problem of joint task offloading, access point selection, and resource allocation in heterogeneous edge environments for latency minimization. Due to the heterogeneity of edge computing devices and the coupling between offloading, access point selection, and resource allocation decisions, it is challenging to optimize over them simultaneously. We decomposed the proposed problem into two disjoint subproblems and developed algorithms for them. The first subproblem is to jointly determine offloading and computing resource allocation decisions; it admits no polynomial-time approximation algorithm, and for it we developed an algorithm based on semidefinite relaxation. The second subproblem is to jointly determine access point selection and communication resource allocation decisions, for which we proposed an algorithm with a provable approximation ratio of 2.62. We conducted extensive numerical simulations to evaluate the proposed algorithms. Results highlighted that the proposed algorithms outperformed baselines and were near-optimal over a wide range of settings.
Speaker Yu Liu

Yu Liu received his B. Eng. degree in Telecommunication Engineering from Xidian University, Xi'an, China. He is now pursuing his Ph.D. degree in Computer Engineering at Stony Brook University. His research interests are in online algorithms and edge computing, with a focus on the placement and resource management of virtual network functions and the reliability of service function chains.


Latency-Optimal Pyramid-based Joint Communication and Computation Scheduling for Distributed Edge Computing

Quan Chen and Kaijia Wang (Guangdong University of Technology, China); Song Guo (The Hong Kong Polytechnic University, Hong Kong); Tuo Shi (Tianjin University, China); Jing Li (The Hong Kong Polytechnic University, Hong Kong); Zhipeng Cai (Georgia State University, USA); Albert Zomaya (The University of Sydney, Australia)

0
By combining edge computing and parallel computing, distributed edge computing has emerged as a new paradigm to accelerate computation at the edge. Considering the parallelism of both computation and communication, the problem of Minimum Latency joint Communication and Computation Scheduling (MLCCS) has been studied recently. However, existing works make the rigid assumptions that the communication time of each device is fixed and that the workload can be split arbitrarily small. Aiming at making the work more practical and general, the MLCCS problem without the above assumptions is studied in this paper. Firstly, the MLCCS problem under a general model is formulated and proved to be NP-hard. Secondly, a pyramid-based computing model is proposed to jointly consider the parallelism of communication and computation, which has an approximation ratio of 1 + δ, where δ is related to the devices' communication rates. An interesting property of this computing model, namely that the optimal latency can be obtained under an arbitrary scheduling order when all devices have the same communication rate, is identified and proved. Additionally, when the workload cannot be split arbitrarily, an approximation algorithm with a ratio of at most 2(1+δ) is proposed. Finally, both simulation results and testbed experiments verify the high performance of the proposed methods.
Speaker Quan Chen(Guangdong University of Technology)

Quan Chen received his BS, Master and PhD degrees in the School of Computer Science and Technology at Harbin Institute of Technology, China. He is currently an associate professor in the School of Computers at Guangdong University of Technology. In the past, he worked as a postdoctoral research fellow in the Department of Computer Science at Georgia State University. His research interests include wireless communication,networking and distributed edge computing.


Session Chair

György Dán

Session E-8

Video and Web Applications

Conference
10:30 AM — 12:00 PM EDT
Local
May 19 Fri, 10:30 AM — 12:00 PM EDT
Location
Babbio 219

Owl: A Pre- and Post-processing Framework for Video Analytics in Low-light Surroundings

Rui-Xiao Zhang, Chaoyang Li, Chenglei Wu, Tianchi Huang and Lifeng Sun (Tsinghua University, China)

0
Low-light environments are pervasive in real-world video analytics applications. Conventional wisdom claims that, in order to accommodate the extensive computation requirements of the analytics model and achieve high inference accuracy, the overall pipeline should leverage a client-to-cloud framework that performs cloud-based inference with on-demand video streaming. However, we show that due to the amplified noise, directly streaming video in low-light scenarios can introduce significant bandwidth inefficiency.
In this paper, we propose Owl, an intelligent framework to optimize bandwidth utilization and inference accuracy for the low-light video analytics pipeline. The core idea of Owl is two-fold. On the one hand, we deploy a lightweight pre-processing module before transmission, which denoises the video and significantly reduces the transmitted data; on the other hand, we recover the information from the denoised video via a DNN-based enhancement module on the server side. Specifically, through content-aware feature clustering and task-oriented fine-tuning, Owl can well coordinate the front end and back end, and intelligently determine the best denoising level and corresponding enhancement model for different videos. Experiments with a variety of datasets and tasks show that Owl achieves significant bandwidth benefits while consistently optimizing inference accuracy.
Speaker Rui-Xiao Zhang (Tsinghua University)

Rui-Xiao Zhang received his B.E. and Ph.D. degrees from Tsinghua University in 2013 and 2017, respectively. Currently, he is a post-doctoral fellow at the University of Hong Kong. His research interests lie in the areas of content delivery networks, the optimization of multimedia streaming, and machine learning for systems. He has published more than 20 papers in top conferences including ACM Multimedia and IEEE INFOCOM. He also serves as a reviewer for JSAC, TCSVT, TMM, and TMC. He received the Best Student Paper Award presented by the ACM Multimedia Systems Workshop in 2019.


AccDecoder: Accelerated Decoding for Neural-enhanced Video Analytics

Tingting Yuan (Georg-August-University of Göttingen, Germany); Liang Mi (Nanjing University, China); Weijun Wang (Nanjing University & University of Goettingen, China); Haipeng Dai (Nanjing University & State Key Laboratory for Novel Software Technology, China); Xiaoming Fu (University of Goettingen, Germany)

0
The quality of the video stream is key to neural-network-based video analytics. However, low-quality video is inevitably collected by existing surveillance systems because of poor-quality cameras or over-compressed/pruned video streaming protocols, e.g., as a result of upstream bandwidth limits. To address this issue, existing studies use quality enhancers (e.g., neural super-resolution) to improve the quality of videos (e.g., resolution) and eventually ensure inference accuracy. Nevertheless, directly applying quality enhancers does not work in practice because it introduces unacceptable latency. In this paper, we present AccDecoder, a novel accelerated decoder for real-time and neural-enhanced video analytics. AccDecoder adaptively selects a few frames via Deep Reinforcement Learning (DRL), enhances their quality by neural super-resolution, and then up-scales the unselected frames that reference them, which leads to a 6-21% accuracy improvement. AccDecoder provides efficient inference capability by filtering important frames using DRL for DNN-based inference and reusing the results for the other frames by extracting the reference relationships among frames and blocks, which results in a latency reduction of 20-80% compared with baselines.
Speaker Tingting Yuan (University of Göttingen)

Dr. Tingting Yuan is a junior professor at the Institute of Computer Science, University of Göttingen, Germany. She received her Ph.D. degree from Beijing University of Posts and Telecommunications (BUPT), Beijing, China, in 2018. From 2018 to 2020, she was a postdoctoral researcher at INRIA, Sophia Antipolis, France. In 2020, she joined the University of Göttingen as a senior postdoctoral researcher with a Humboldt scholarship. Her current interests are in next-generation networks, including software-defined networking, reinforcement learning, and vehicular ad-hoc networks. She has published more than 20 peer-reviewed papers, including in IEEE INFOCOM, AAAI, IEEE Network, and IEEE TNSM. She has served as a TPC member of GLOBECOM, NoF, and other venues.


Crow API: Cross-device I/O Sharing in Web Applications

Seonghoon Park and Jeho Lee (Yonsei University, Korea (South)); Hojung Cha (Yonsei University, S. Korea, Korea (South))

1
Although cross-device input/output (I/O) sharing is useful for users who own multiple computing devices, previous solutions had a platform-dependency problem. The meta-platform characteristics of web applications could provide a viable solution. In this paper, we propose the Crow application programming interface (API), which allows web applications to access other devices' I/O through standard web APIs without modifying operating systems or browsers. Providing cross-device I/O requires resolving two key challenges. First, the web environment lacks support for device discovery when making a device-to-device connection, which requires significant effort from developers to implement and maintain signaling servers. To address this challenge, we propose a serverless Crow connectivity mechanism using devices' I/O-specific communication schemes. Second, JavaScript runtimes have limitations in supporting cross-device inter-process communication (IPC). To solve this problem, we propose a web IPC scheme, called Crow IPC, which introduces a proxy interface that relays the cross-device IPC connection. Crow IPC also provides a mechanism for ensuring functional consistency. We implemented the Crow API as a JavaScript library with which developers can easily develop their applications. An extensive evaluation showed that the Crow API provides cross-device I/O sharing functionality effectively and efficiently across various web applications and platforms.
Speaker Seonghoon Park (Yonsei University)

Seonghoon Park is currently working toward the Ph.D. degree in computer science at Yonsei University, Seoul, South Korea. His research interests include mobile web experiences, on-device machine learning, and energy-aware mobile systems.


Rebuffering but not Suffering: Exploring Continuous-Time Quantitative QoE by User's Exiting Behaviors

Sheng Cheng, Han Hu, Xinggong Zhang and Zongming Guo (Peking University, China)

0
Quality of Experience (QoE) is one of the most important quality indicators for video streaming applications, but how to assess QoE objectively and quantitatively over continuous time remains an open question for both academia and industry. In this paper, we carry out an extensive study of user behaviors at one of the largest short-video service providers. The measurement data reveal that users' exiting behavior when viewing short video streams is an appropriate choice as a continuous-time QoE metric. Secondly, we build a quantitative QoE model to objectively assess the quality of short-video playback by discretizing each playback session into a state chain. By collecting 7 billion viewing session logs, covering users from 60 countries and regions, 40 CDN providers, and 120 Internet service providers around the world, the proposed state-chain-based model of the State-Exiting Ratio (SER) is validated. The experimental results show that the modeling errors of SER and session duration are less than 2% and 10 s, respectively. Using the proposed scheme to optimize adaptive video streaming improves the average session duration by up to 60% over the baseline and by 20% over existing black-box machine learning methods.
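The state-chain idea can be made concrete with a toy computation of the State-Exiting Ratio from session logs: discretize each session into per-second playback states and, for each state, compute the fraction of time steps in that state at which users exited. The log format and state names below are illustrative, not the paper's schema.

    from collections import Counter

    def state_exiting_ratio(sessions):
        """sessions: list of (states, exit_index), where states is a per-second list such as
        'play' / 'rebuffer' and exit_index is the step at which the user left.
        Returns, per state, the empirical P(exit | currently in that state)."""
        visits, exits = Counter(), Counter()
        for states, exit_index in sessions:
            for i, s in enumerate(states[: exit_index + 1]):
                visits[s] += 1
                if i == exit_index:
                    exits[s] += 1
        return {s: exits[s] / visits[s] for s in visits}

    logs = [(["play"] * 8 + ["rebuffer"] * 2, 9),   # exited during rebuffering
            (["play"] * 20, 19),                    # watched to the end
            (["play"] * 5 + ["rebuffer"] * 3, 6)]
    print(state_exiting_ratio(logs))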
Speaker Sheng Cheng (Peking University), Xinggong Zhang (Peking University)

Sheng Cheng received the bachelor's degree from Peking University, Beijing, China, in 2020. He is currently pursuing the M.S. degree from Wangxuan Institute of Computer Technology, Peking University.

His research interests lie in real-time video streaming, adaptive forward error correction for communication and video quality assessment. He is also interested in the application of Artificial Intelligence in network systems.


Xinggong Zhang (Senior Member, IEEE) received the Ph.D. degree from the Department of Computer Science, Peking University, Beijing, China, in 2011.

He is currently an Associate Professor at Wangxuan Institute of Computer Technology, Peking University. Before that, he was Senior Researcher at Founder Research and Development Center, Peking University from 1998 to 2007. He was a Visiting Scholar with the Polytechnic Institute of New York University from 2010 to 2011. His research interests lie in the modeling and optimization of multimedia networks, VR/AR/video streaming and satellite networks.


Session Chair

Xuyu Wang

Session F-8

Network Design and Fault Tolerance

Conference
10:30 AM — 12:00 PM EDT
Local
May 19 Fri, 10:30 AM — 12:00 PM EDT
Location
Babbio 220

Distributed Demand-aware Network Design using Bounded Square Root of Graphs

Or Peres (Ben Gurion University, Israel); Chen Avin (Ben-Gurion University of the Negev, Israel)

2
While the traditional design of network topologies is demand-oblivious, recent advances in reconfigurable networks enable real-time, dynamic communication network topologies, e.g., in datacenter networks. This trend motivates a new paradigm where topologies can adjust to the demand they need to serve. We consider the static version of this network design problem, where the input is a request distribution D (a demand matrix) and a bound ∆ on the maximum degree of the output topology. In turn, the objective is to design an (undirected) demand-aware network N of bounded degree ∆ that minimizes the expected path length (with respect to D).

This paper draws a connection between the k-root of graphs and the network design problem, and uses forest decomposition of the demand as the primary methodology. In turn, we provide new algorithms for demand-aware network design, including cases where our algorithms are (order) optimal and improve previous results. In addition, for the case of bounded arboricity, we provide, for the first time, (i) an efficient distributed algorithm for the CONGEST model and (ii) an efficient PRAM-based parallel algorithm. We also present empirical results on real-world demand matrices where our algorithms produce network designs with both low degree and low expected path length.
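The objective being optimized, the expected path length of a candidate topology N with respect to a demand distribution D, is straightforward to evaluate; the snippet below does so with networkx shortest paths. This computes the metric only and is not the authors' construction algorithm.

    import networkx as nx

    def expected_path_length(N: nx.Graph, demand: dict) -> float:
        """demand maps (u, v) pairs to probabilities summing to 1; N is the candidate topology."""
        dist = dict(nx.all_pairs_shortest_path_length(N))
        return sum(p * dist[u][v] for (u, v), p in demand.items())

    ring = nx.cycle_graph(4)                          # a degree-2 candidate topology
    D = {(0, 1): 0.4, (0, 2): 0.4, (1, 3): 0.2}       # skewed demand distribution
    print(expected_path_length(ring, D))              # 0.4*1 + 0.4*2 + 0.2*2 = 1.6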
Speaker Chen Avin

Chen Avin is a Professor at the School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Israel. He received his MSc and Ph.D. in computer science from the University of California, Los Angeles (UCLA) in 2003 and 2006. Recently he served as the chair of the Communication Systems Engineering department at BGU. His current research interests are data-driven graphs and network algorithms, modeling, and analysis, emphasizing demand-aware networks, distributed systems, social networks, and randomized algorithms for networking.


A Fast and Exact Evaluation Algorithm for the Expected Number of Connected Nodes: an Enhanced Network Reliability Measure

Kengo Nakamura (NTT Corporation & Kyoto University, Japan); Takeru Inoue (NTT Network Innovation Labs., Japan); Masaaki Nishino and Norihito Yasuda (NTT Communication Science Laboratories, Japan); Shin-ichi Minato (Kyoto University, Japan)

0
Contemporary society relies on several network infrastructures, such as communication and transportation. These network infrastructures are required to keep all nodes connected, although nodes are occasionally disconnected due to failures. Thus, the expected number of connected node pairs (ECP) during an operation period is a reasonable reliability measure in network design. However, no work has studied ECP due to its computational hardness: we would have to solve the reliability evaluation problem, itself a computationally tough problem, \(O(n^2)\) times, where \(n\) is the number of nodes in the network.

This paper proposes an efficient method that exactly computes ECP. Our method performs dynamic programming just once, without explicit repetition for each node pair, and obtains an exact ECP value weighted by the number of users at each node. A thorough complexity analysis reveals that our method is faster, by a factor of \(O(n)\), than an existing reliability evaluation method that can be transferred to ECP computation. Numerical experiments using real topologies show great efficiency; e.g., our method computes the ECP of an 821-link network in ten seconds, whereas the existing method cannot complete it within an hour. This paper also presents two applications: critical link identification and optimal resource (e.g., a server) placement.
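To see what ECP measures, the following Monte-Carlo estimator computes the expected number of connected node pairs under independent link failures. The paper's contribution is an exact and much faster dynamic-programming evaluation; this sketch is only for intuition, with an assumed uniform link survival probability.

    import random
    import networkx as nx

    def ecp_monte_carlo(G: nx.Graph, up_prob: float, samples: int = 5000, seed: int = 0) -> float:
        """Estimate the expected number of connected node pairs when each link
        survives independently with probability up_prob."""
        rng = random.Random(seed)
        total = 0
        for _ in range(samples):
            H = nx.Graph()
            H.add_nodes_from(G.nodes)
            H.add_edges_from(e for e in G.edges if rng.random() < up_prob)
            for comp in nx.connected_components(H):
                total += len(comp) * (len(comp) - 1) // 2   # connected pairs in this component
        return total / samples

    print(ecp_monte_carlo(nx.grid_2d_graph(4, 4), up_prob=0.9))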
Speaker Kengo Nakamura (NTT Corporation & Kyoto University)

Kengo Nakamura received the B.E. and M.E. degrees in information science and technology from the University of Tokyo, Japan, in 2016 and 2018. He is currently working as a researcher at NTT Communication Science Laboratories, Japan, and pursuing a Ph.D. degree at Kyoto University, Japan.


Network Slicing: Market Mechanism and Competitive Equilibria

Panagiotis Promponas and Leandros Tassiulas (Yale University, USA)

0
Towards addressing spectral scarcity and enhancing resource utilization in 5G networks, network slicing is a promising technology to establish end-to-end virtual networks without requiring additional infrastructure investments. By leveraging Software Defined Networking (SDN) and Network Function Virtualization (NFV), we can realize slices that are completely isolated and dedicated to satisfying users' diverse Quality of Service (QoS) requirements and Service Level Agreements (SLAs). This paper focuses on the technical and economic challenges that emerge from the application of the network slicing architecture to real-world scenarios. We consider a market where multiple Network Providers (NPs) own the physical infrastructure and offer their resources to multiple Service Providers (SPs). Then, the SPs offer those resources as slices to their associated users. We propose a holistic iterative model for the network slicing market along with a clock auction that converges to a robust \(\epsilon\)-competitive equilibrium. At the end of each market cycle, the slices are reconfigured and the SPs aim to learn the private parameters of their users. Numerical results are provided that validate and evaluate the convergence of the clock auction and the capability of the proposed market architecture to express the incentives of the different entities of the system.
Speaker Panagiotis Promponas

Panagiotis Promponas (Graduate Student Member, IEEE) received the Diploma degree in electrical and computer engineering (ECE) from the National Technical University of Athens (NTUA), Greece, in 2019. He is currently a PhD student in the Electrical Engineering department at Yale University. Primarily, his research interests center around the field of resource allocation in constrained interdependent systems, with particular applications in the areas of quantum networks and wireless networks. He was a recipient of the Best Paper Award at the 12th IFIP WMNC 2019.


Tomography-based Progressive Network Recovery and Critical Service Restoration after Massive Failures

Viviana Arrigoni, Matteo Prata and Novella Bartolini (Sapienza University of Rome, Italy)

1
Massive failures in communication networks are a consequence of natural disasters, heavy blackouts, military and cyber attacks. We tackle the problem of minimizing the time and number of interventions to sufficiently restore the communication network so as to support emergency services after large-scale failures. We propose PRoTON (Progressive RecOvery and Tomography-based mONitoring), an efficient algorithm for progressive recovery of emergency services. Unlike previous work, assuming centralized routing and complete network observability, PRoTON addresses the more realistic scenario in which the network relies on the existing routing protocols, and knowledge of the network state is partial and uncertain. Simulation results carried out on real topologies show that our algorithm outperforms previous solutions in terms of cumulative routed flow, repair costs and recovery time in both static and dynamic failure scenarios.
Speaker Viviana Arrigoni (Sapienza University of Rome)

Viviana Arrigoni is a research fellow at the Department of Computer Science at Sapienza University of Rome. She received her PhD from the same department and has authored several research papers. Her research interests include networking, network monitoring, and computational linear algebra.


Session Chair

Gabor Retvari

Session Lunch-Day3

Conference Lunch

Conference
12:00 PM — 12:30 PM EDT
Local
May 19 Fri, 12:00 PM — 12:30 PM EDT
Location
Univ. Center Complex TechFlex & Patio

Session A-9

IoT

Conference
1:30 PM — 3:00 PM EDT
Local
May 19 Fri, 1:30 PM — 3:00 PM EDT
Location
Babbio 122

Enable Batteryless Flex-sensors via RFID Tags

Mengning Li (North Carolina State University, USA)

0
Detection of the flex-angle of objects or human bodies can benefit various scenarios such as robotic arm control, medical rehabilitation, and deformation detection. However, the two common solutions, flex sensors and computer vision methods, inevitably have the following limitations: (i) battery-powered flex sensors have limited system lifetime; (ii) computer vision methods fail in Non-Line-of-Sight (NLoS) scenarios. To overcome these limitations, we propose, for the first time, an RFID-based Flex-sensor (RFlexor) system to enable flex-angle detection in a batteryless manner for NLoS scenarios. The basic insight of RFlexor is that flexing a tag affects its hardware characteristics and thus changes the phase and Received Signal Strength Indicator (RSSI) received by the reader antenna. To capture the relationship between phase/RSSI and the flex-angle, we train a multi-input AI model in advance. By feeding the processed data into the model, we can accurately detect tag flex-angles. We use Commercial-Off-The-Shelf (COTS) RFID devices to implement the RFlexor system. Extensive experiments reveal that RFlexor achieves fine-grained flex-angle detection: the detection error is less than 10 degrees with a probability higher than 90% under most conditions, and the average detection error is always less than 10 degrees across all experiments.
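To give a flavor of the "multi-input model" step, the toy sketch below trains a generic regressor on stacked phase/RSSI features; the feature layout, the number of reads, the scikit-learn model choice, and the synthetic data are all illustrative assumptions, not RFlexor's actual design.

    # Illustrative sketch only: a generic regressor stands in for RFlexor's
    # multi-input AI model; features, reader setup, and model choice are
    # assumptions, and the training data below is synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    n = 500
    phase = rng.uniform(0, 2 * np.pi, size=(n, 4))   # phase from 4 reads (assumed)
    rssi = rng.uniform(-70, -40, size=(n, 4))        # RSSI from 4 reads (assumed)
    angle = rng.uniform(0, 180, size=n)              # flex-angle labels (synthetic)

    X = np.hstack([phase, rssi])                     # multi-input feature vector
    model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, angle)
    print("predicted flex-angles (deg):", np.round(model.predict(X[:3]), 1))
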
Speaker Mengning Li

Mengning Li is a first-year Ph.D. student at North Carolina State University, where she is fortunate to be advised by Prof. Wenye Wang. Her research interest mainly lies in wireless sensing.


TomoID: A Scalable Approach to Device Free Indoor Localization via RFID Tomography

Yang-Hsi Su and Jingliang Ren (University of Michigan, USA); Zi Qian (Tsinghua University, China); David Fouhey and Alanson Sample (University of Michigan, USA)

0
Device-free localization methods allow users to benefit from location-aware services without the need to carry a transponder. However, conventional radio sensing approaches using active wireless devices require wired power or continual battery maintenance, limiting deployability. We present TomoID, a real-time multi-user UHF RFID tomographic localization system that uses low-level communication channel parameters such as RSSI, RF phase, and read rate to create probability heatmaps of users' locations. The heatmaps are passed to our custom-designed signal processing and machine learning pipeline to robustly predict users' locations. Results show that TomoID is highly accurate, with an average mean error of 17.1 cm for a stationary user and 18.9 cm when users are walking. With multi-user tracking, results show an average mean error of 70.0 cm for five individuals in constant motion. Importantly, TomoID is specifically designed to work in real-world multipath-rich indoor environments. Our signal processing and machine learning pipeline allows a pre-trained localization model to be applied to new environments of different shapes and sizes, while maintaining accuracy sufficient for indoor user localization and tracking. Ultimately, TomoID enables a scalable, easily deployable, and minimally intrusive method for locating uninstrumented users in indoor environments.
Speaker Yang-Hsi Su (University of Michigan - Ann Arbor)

Yang-Hsi Su is a 3rd-year PhD student in the Interactive Sensing and Computing Lab led by Prof. Alanson Sample at the University of Michigan. His research mainly focuses on RF sensing and RF localization.


Extracting Spatial Information of IoT Device Events for Smart Home Safety Monitoring

Yinxin Wan, Xuanli Lin, Kuai Xu, Feng Wang and Guoliang Xue (Arizona State University, USA)

0
Smart home IoT devices have been widely deployed and connected to many home networks for various applications such as intelligent home automation, connected healthcare, and security surveillance. The informative network traffic traces generated by IoT devices have enabled recent research advances in smart home network measurement. However, due to the cloud-based communication model of smart home IoT devices and the lack of traffic data collected at the cloud end, little effort has been devoted to extracting the spatial information of IoT device events to determine where a device event is triggered. In this paper, we examine why extracting the spatial information of device events is challenging by analyzing the communication model of the smart home IoT system. We then propose a system named IoTDuet for determining whether a device event is triggered locally or remotely, utilizing the fact that controlling devices such as smartphones and tablets always communicate with cloud servers using static domain names when issuing commands from the home network. We further show the importance of extracting this critical spatial information of IoT device events by exploring its applications in smart home safety monitoring.
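The key observation — a locally triggered event is accompanied by a controlling device (phone or tablet) talking to a static control-plane domain inside the home network — can be sketched as a simple traffic-window check. The domain name, time window, and record format below are hypothetical, not taken from the paper.

    # Illustrative sketch, not IoTDuet itself: decide whether a device event was
    # triggered locally or remotely by looking for controlling-device traffic to
    # a known (static) control-plane domain shortly before the event.
    from dataclasses import dataclass

    CONTROL_DOMAINS = {"control.example-vendor.com"}   # hypothetical static domain
    WINDOW_SECONDS = 5.0                               # assumed correlation window

    @dataclass
    class FlowRecord:
        timestamp: float      # seconds since epoch
        src_is_phone: bool    # True if the flow originates from a controlling device
        domain: str           # server name contacted

    def classify_event(event_time: float, home_flows: list[FlowRecord]) -> str:
        """Return 'local' if a controlling device contacted a control domain
        within WINDOW_SECONDS before the event, otherwise 'remote'."""
        for f in home_flows:
            if (f.src_is_phone and f.domain in CONTROL_DOMAINS
                    and 0.0 <= event_time - f.timestamp <= WINDOW_SECONDS):
                return "local"
        return "remote"

    flows = [FlowRecord(100.0, True, "control.example-vendor.com")]
    print(classify_event(102.5, flows))   # -> local
    print(classify_event(200.0, flows))   # -> remote
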
Speaker Yinxin Wan (Arizona State University)

Yinxin Wan is a final-year Ph.D. candidate majoring in Computer Science at Arizona State University. He obtained his B.E. degree from the University of Science and Technology of China in 2018. His research interests include cybersecurity, IoT, network measurement, and data-driven networked systems.


RT-BLE: Real-time Multi-Connection Scheduling for Bluetooth Low Energy

Yeming Li and Jiamei Lv (Zhejiang University, China); Borui Li (Southeast University, China); Wei Dong (Zhejiang University, China)

0
Bluetooth Low Energy (BLE) is one of the most popular wireless protocols for building IoT applications. However, BLE suffers from three major issues that prevent it from providing reliable service to time-critical IoT applications. First, BLE operates in the crowded 2.4 GHz frequency band, which can lead to a high packet loss rate. Second, it is common for one device to connect with multiple BLE peripherals, which can lead to severe collision issues. Third, there is a long delay in re-allocating time resources. In this paper, we propose RT-BLE, a real-time multi-connection scheduling scheme for BLE. We first model the BLE transmission latency in noisy RF environments, taking the BLE retransmission mechanism into account. With this model, RT-BLE can derive a set of initial connection parameters. Then, RT-BLE uses collision-tree-based time resource scheduling to efficiently manage time resources. Finally, we propose a subrating-based fast connection re-scheduling method to update the connection parameters and the positions of anchor points. The results show that RT-BLE can provide reliable service and that the error of our model is less than 0.69%. Compared with existing works, the re-scheduling delay is reduced by up to 86.25% and the capacity is up to 4.33x higher.
Speaker Yeming Li (Zhejiang University)

Yeming Li received the B.S. degree in computer science from Zhejiang University of Technology in 2020.

He is currently pursuing the Ph.D. degree with Zhejiang University.

His research interests include Internet of Things and wireless protocols.


Session Chair

Gianluca Rizzo

Session B-9

Wireless Systems

Conference
1:30 PM — 3:00 PM EDT
Local
May 19 Fri, 1:30 PM — 3:00 PM EDT
Location
Babbio 104

NFChain: A Practical Fingerprinting Scheme for NFC Tag Authentication

Yanni Yang (Shandong University, China); Jiannong Cao (The Hong Kong Polytechnic University, Hong Kong); Zhenlin An (The Hong Kong Polytechnic University, Hong Kong); Yanwen Wang (Hunan University, China); Pengfei Hu and Guoming Zhang (Shandong University, China)

0
NFC tag authentication is highly demanded to avoid tag abuse. Recent fingerprinting methods employ the physical-layer signal, which embeds the tag hardware imperfections for authentication. However, existing NFC fingerprinting methods suffer from either low scalability for a large number of tags or incompatibility with NFC protocols, impeding the practical application of NFC authentication systems. To fill this gap, we propose NFChain, a new NFC fingerprinting scheme that excavates the tag hardware uniqueness from the protocol-agnostic tag response signal. Specifically, we harness an agile and compatible frequency band of NFC to extract the tag fingerprint from a chain of tag responses over multiple frequencies, which significantly improves the fingerprint scalability. However, extracting the desired fingerprint encounters two practical challenges: (1) fingerprint inconsistency under different device configurations and (2) fingerprint variations across multiple measurements of the same tag due to noises in the generic device. To tackle these challenges, we first design an effective nulling method to eliminate the effect of device configurations. Second, we employ contrastive learning to reduce fingerprint variations for accurate authentication. Extensive experiments show we can achieve as low as 3.7% FRR and 4.1% FAR for over 600 tags.
Speaker Zhenlin An (The Hong Kong Polytechnic University)

Zhenlin An is a postdoc from The Hong Kong Polytechnic University. His research interests are in wireless sensing and communication, metasurface, and low-power IoT systems. He is currently on the job market.


ICARUS: Learning on IQ and Cycle Frequencies for Detecting Anomalous RF Underlay Signals

Debashri Roy (Northeastern University, USA); Vini Chaudhary (Northeastern University, USA); Chinenye Tassie (Northeastern University, USA); Chad M Spooner (NorthWest Research Associates, USA); Kaushik Chowdhury (Northeastern University, USA)

0
The RF environment in a secure space can be compromised by intentional transmissions of hard-to-detect underlay signals that overlap with a high-power baseline transmission. Specifically, we consider the case where a direct sequence spread spectrum (DSSS) signal is the underlay signal hiding within a baseline 4G Long-Term Evolution (LTE) signal. Compared to overt actions like jamming, the DSSS signal leaves the LTE signal decodable, which makes it hard to detect. ICARUS is a machine learning based framework that offers choices at the physical layer for inference with inputs of (i) IQ samples only, (ii) cycle frequency features obtained via cyclostationary signal processing (CSP), and (iii) a fusion of both, to detect the underlay DSSS signal and its modulation type within LTE frames. ICARUS chooses the best inference method considering both the expected accuracy and the computational overhead. ICARUS is rigorously validated on multiple real-world datasets that include signals captured in cellular bands in the wild and on the NSF POWDER testbed for advanced wireless research (PAWR). Results reveal that ICARUS can detect DSSS anomalies and their modulation scheme with 98-100% and 67-99% accuracy, respectively, while completing inference within 3-40 milliseconds on an NVIDIA A100 GPU platform.
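The path-selection idea — choose among IQ-only, CSP-feature, or fusion inference by trading expected accuracy against compute overhead — can be expressed as an arg-max over a simple utility. The accuracy/latency numbers and the weighting below are made-up illustrations, not ICARUS's measurements.

    # Illustrative sketch: pick the inference path that maximizes a utility
    # combining expected accuracy and (negative) compute overhead.
    # All numbers are invented for illustration, not ICARUS's results.
    paths = {
        "iq_only":  {"accuracy": 0.90, "latency_ms": 3.0},
        "csp_only": {"accuracy": 0.95, "latency_ms": 25.0},
        "fusion":   {"accuracy": 0.98, "latency_ms": 40.0},
    }

    def pick_path(latency_budget_ms: float, weight: float = 0.001) -> str:
        """Among paths meeting the latency budget, pick the best
        accuracy-minus-weighted-latency trade-off."""
        feasible = {k: v for k, v in paths.items() if v["latency_ms"] <= latency_budget_ms}
        return max(feasible, key=lambda k: feasible[k]["accuracy"] - weight * feasible[k]["latency_ms"])

    print(pick_path(latency_budget_ms=10.0))   # -> iq_only
    print(pick_path(latency_budget_ms=50.0))   # -> fusion
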
Speaker Debashri Roy (Northeastern University)

Debashri Roy is an associate research scientist in the Department of Electrical and Computer Engineering, Northeastern University. She received her Ph.D. in Computer Science from the University of Central Florida in May 2020. Her research interests involve machine learning based applications in the wireless communication domain, targeting the areas of deep spectrum learning, millimeter wave beamforming, multimodal fusion, and networked robotics for next-generation communication.


WSTrack: A Wi-Fi and Sound Fusion System for Device-free Human Tracking

Yichen Tian, Yunliang Wang, Ruikai Zheng, Xiulong Liu, Xinyu Tong and Keqiu Li (Tianjin University, China)

0
Voice assistants benefit from the ability to localize users; in particular, we can analyze a user's habits from their historical trajectory to provide better services. However, current voice localization methods require the user to actively issue voice commands, which makes voice assistants unable to track silent users most of the time. This paper presents WSTrack, a Wi-Fi and sound fusion system for device-free human tracking. Notably, current voice assistants naturally support both Wi-Fi and acoustic functions, so we can build the multi-modal prototype with just voice assistants and Wi-Fi routers. To track the movement of silent users, our insights are as follows: (1) the voice assistant can hear the sound of the user's footsteps and thereby extract the direction the user is in; (2) we can extract the user's velocity from the Wi-Fi signal. By fusing the multi-modal information, we are able to track users with a single voice assistant and Wi-Fi router. Our implementation and evaluation on commodity devices demonstrate that WSTrack achieves better performance than current systems, with a median tracking error of 0.37 m.
Speaker Yichen Tian (Tianjin University)

Yichen Tian is currently working toward the master’s degree at the College of Intelligence and Computing, Tianjin University, China. His research interests include wireless sensing and indoor localization.


SubScatter: Subcarrier-Level OFDM Backscatter

Jihong Yu (Beijing Institute of Technology, China); Caihui Du (Beijing Institute of Technology, China); Jiahao Liu (Beijing Institute of Technology, China); Rongrong Zhang (Capital Normal University, China); Shuai Wang (Beijing Institute of Technology, China)

0
OFDM backscatter is crucial to passive IoT. Most existing works adopt phase-modulated schemes to embed tag data, which suffer from three drawbacks: symbol-level modulation limitation, heavy reliance on synchronization accuracy, and low tolerance of symbol time offset (STO) and carrier frequency offset (CFO). We introduce SubScatter, the first subcarrier-level frequency-modulated OFDM backscatter, which is able to tolerate larger synchronization errors, STO, and CFO. The unique feature that sets SubScatter apart from other backscatter systems is our subcarrier shift keying (SSK) modulation. This method pushes the modulation granularity down to the subcarrier by encoding and mapping tag data into different subcarrier patterns. We also design a tandem frequency shift (TFS) scheme that enables SSK with low cost and low power. For decoding, we propose a correlation-based method that decodes tag data from the correlation between the original and backscattered OFDM symbols. We prototype and test SubScatter under 802.11g OFDM WiFi signals. Comprehensive evaluations show that SubScatter outperforms prior works in terms of effectiveness and robustness. Specifically, SubScatter achieves 743 kbps throughput, 3.1x and 14.9x higher than RapidRider and MOXcatter, respectively. It also has a much lower BER under noise and interference, over 6x better than RapidRider or MOXcatter.
Speaker Jihong Yu (Beijing Institute of Technology)

Jihong Yu received the B.E. degree in communication engineering and the M.E. degree in communication and information systems from Chongqing University of Posts and Telecommunications, Chongqing, China, in 2010 and 2013, respectively, and the Ph.D. degree in computer science from the University of Paris-Sud, Orsay, France, in 2016. He was a postdoctoral fellow in the School of Computing Science, Simon Fraser University, Canada. He is currently a professor in the School of Information and Electronics at Beijing Institute of Technology. His research interests include backscatter networking, the Internet of Things, and space-air communications. He is serving as an Area Editor for Elsevier Computer Communications and an Associate Editor for the IEEE Internet of Things Journal and the IEEE Transactions on Vehicular Technology. He received the Best Paper Award at the IEEE Global Communications Conference (GLOBECOM) 2020.


Session Chair

Alex Sprintson

Session C-9

Crowdsourcing

Conference
1:30 PM — 3:00 PM EDT
Local
May 19 Fri, 1:30 PM — 3:00 PM EDT
Location
Babbio 202

Multi-Objective Order Dispatch for Urban Crowd Sensing with For-Hire Vehicles

Jiahui Sun, Haiming Jin, Rong Ding and Guiyun Fan (Shanghai Jiao Tong University, China); Yifei Wei (Carnegie Mellon University, USA); Lu Su (Purdue University, USA)

0
For-hire vehicle-enabled crowd sensing (FVCS) has become a promising paradigm for conducting urban sensing tasks in recent years. FVCS platforms aim to jointly optimize both the order-serving revenue and the sensing coverage and quality. However, these two objectives often conflict and need to be balanced according to the platforms' preferences over them. To address this problem, we propose a novel cooperative multi-objective multi-agent reinforcement learning framework, referred to as MOVDN, to serve as the first preference-configurable order dispatch mechanism for FVCS platforms. Specifically, MOVDN adopts a decomposed network structure, which enables agents to make distributed order selection decisions while aligning each agent's local decision with the global objectives of the FVCS platform. Then, we propose a novel algorithm to train a single universal MOVDN that is optimized over the space of all preferences. This allows our trained model to produce the optimal policy for any preference. Furthermore, we provide a theoretical convergence guarantee and sample efficiency analysis of our algorithm. Extensive experiments on three real-world ride-hailing order datasets demonstrate that MOVDN outperforms strong baselines and can effectively support the platform in decision-making.
Speaker Haiming Jin (Shanghai Jiao Tong University)

I am currently a tenure-track Associate Professor in the Department of Computer Science and Engineering at Shanghai Jiao Tong University (SJTU). From August 2021 to December 2022, I was a tenure-track Associate Professor in the John Hopcroft Center (JHC) for Computer Science at SJTU. From September 2018 to August 2021, I was an assistant professor in JHC at SJTU. From June 2017 to June 2018, I was a Postdoctoral Research Associate in the Coordinated Science Laboratory (CSL) of University of Illinois at Urbana-Champaign (UIUC). I received my PhD degree from the Department of Computer Science of UIUC in May 2017, advised by Prof. Klara Nahrstedt. Before that, I received my Bachelor degree from the Department of Electronic Engineering of SJTU in July 2012.


AoI-aware Incentive Mechanism for Mobile Crowdsensing using Stackelberg Game

Mingjun Xiao, Yin Xu and Jinrui Zhou (University of Science and Technology of China, China); Jie Wu (Temple University, USA); Sheng Zhang (Nanjing University, China); Jun Zheng (University of Science and Technology of China, China)

0
Mobile CrowdSensing (MCS) is a mobile computing paradigm through which a platform can coordinate a crowd of workers to accomplish large-scale data collection tasks using their mobile devices. Information freshness has attracted much attention in MCS research worldwide. In this paper, we investigate incentive mechanism design in MCS that takes the freshness of collected data and social benefits into account. First, we introduce the Age of Information (AoI) metric to measure data freshness. Then, we model the incentive mechanism design with AoI guarantees as a novel incomplete-information two-stage Stackelberg game with multiple constraints. Next, we derive the optimal strategies of this game to determine the optimal reward paid by the platform and the optimal data update frequency for each worker. Moreover, we prove that these optimal strategies form a unique Stackelberg equilibrium. Based on the optimal strategies, we propose an AoI-Aware Incentive (AIAI) mechanism, whereby the platform and workers can maximize their utilities simultaneously. Meanwhile, the system ensures that the AoI values of all data uploaded to the platform are no larger than a given threshold to achieve high data freshness. Extensive simulations on real-world traces demonstrate the significant performance of AIAI.
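The two-stage structure can be illustrated with a toy backward-induction computation: the platform (leader) posts a reward, each worker (follower) best-responds with an update frequency, and the AoI bound caps the inter-update gap. The quadratic cost, linear reward, and all numbers below are assumptions for illustration, not the paper's model.

    # Toy Stackelberg backward induction (illustrative, not the paper's model):
    # worker utility  u_w(f) = r * f - c * f**2  -> best response f*(r) = r / (2c),
    # clipped so that AoI = 1/f stays below a threshold A (i.e., f >= 1/A);
    # platform utility u_p(r) = v * f*(r) - r * f*(r), maximized by grid search.
    import numpy as np

    c, v, A = 2.0, 5.0, 4.0   # worker cost coeff., platform value per update, AoI bound

    def worker_best_response(r: float) -> float:
        return max(r / (2 * c), 1.0 / A)     # enforce the AoI constraint f >= 1/A

    def platform_utility(r: float) -> float:
        f = worker_best_response(r)
        return v * f - r * f

    rewards = np.linspace(0.01, v, 1000)
    best_r = max(rewards, key=platform_utility)
    print(f"optimal reward ~ {best_r:.2f}, worker frequency ~ {worker_best_response(best_r):.2f}")
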
Speaker Yin Xu

Yin Xu received her B.S. degree from the School of Computer Science and Technology at the Anhui University (AHU), Hefei, China, in 2019. She is currently a PhD student in the School of Computer Science and Technology at the University of Science and Technology of China (USTC), Hefei, China. Her research interests include mobile crowdsensing, federated learning, privacy preservation, game theory, edge computing, and incentive mechanism design.


Crowd2: Multi-agent Bandit-based Dispatch for Video Analytics upon Crowdsourcing

Yu Chen, Sheng Zhang, Yuting Yan, Yibo Jin, Ning Chen and Mingtao Ji (Nanjing University, China); Mingjun Xiao (University of Science and Technology of China, China)

1
Many crowdsourcing platforms are emerging, leveraging the resources of recruited workers to execute various outsourced tasks, especially computing-intensive video analytics with high quality requirements. Although the profit of each platform is strongly related to the quality of analytics feedback, the uncertainty in the diverse performance of workers and the conflicts of interest across platforms make it non-trivial to determine the dispatch of tasks with maximum benefits. In this paper, we design a decentralized mechanism for a Crowd of Crowdsourcing platforms, denoted Crowd2, optimizing worker selection to maximize the social welfare of these platforms over a long-term horizon, while accounting for both proportional fairness and dynamic flexibility. Concretely, we propose a video analytics dispatch algorithm based on multi-agent bandits, in which more accurate profit estimates are attained by decoupling a multi-knapsack-based mapping problem. Via rigorous proofs, a sub-linear regret bound on the social welfare of crowdsourcing profits is achieved while both fairness and flexibility are ensured. Extensive trace-driven experiments demonstrate that Crowd2 improves social welfare by 36.8% compared with other alternatives.
Speaker Yu Chen (Nanjing University)

Yu Chen received the BS degree from the Department of Computer Science and Technology, Nanjing University, China, in 2019, where he is currently working toward the PhD degree under the supervision of associate professor Sheng Zhang. He is a member of the State Key Laboratory for Novel Software Technology. To date, he has published more than 10 papers, in journals such as TPDS and Journal of Software, and conferences such as INFOCOM, ICPP, IWQoS, ICC and ICPADS. His research interests include video analytics and edge computing.


Spatiotemporal Transformer for Data Inference and Long Prediction in Sparse Mobile CrowdSensing

En Wang, Weiting Liu and Wenbin Liu (Jilin University, China); Chaocan Xiang (Chongqing University, China); Bo Yang and Yongjian Yang (Jilin University, China)

0
Mobile CrowdSensing (MCS) is a data sensing model that recruits users carrying mobile terminals to collect data. As its variant, Sparse MCS has become a practical paradigm for large-scale and fine-grained urban sensing, with the advantage of collecting only a small amount of data to infer the full map. However, in many real-world scenarios, such as early prevention of epidemics, users are interested not only in inferring the entire sensing map but also in long-term prediction of the sensing map, where the latter is even more important. Long-term prediction not only reduces sensing cost but also identifies trends and other characteristics of the data. In this paper, we propose a novel Spatiotemporal Transformer (ST-transformer) model to infer and predict the data from sparsely sensed data based on spatiotemporal relationships. We design a spatiotemporal feature embedding to embed the prior spatiotemporal information of the sensing map into the model to guide model learning. Moreover, we design a multi-head spatiotemporal attention mechanism to dynamically capture spatiotemporal relationships among the data. Extensive experiments have been conducted on three types of typical urban sensing tasks, which verify the effectiveness of our proposed algorithms in improving inference and long-term prediction accuracy with sparsely sensed data.
Speaker Weiting Liu (Jilin University)

Weiting Liu received his bachelor's degree in software engineering from Jilin University, Changchun, China, in 2020. He is currently pursuing a master's degree in computer science and technology at Jilin University. His current research focuses on mobile crowdsensing and spatiotemporal data processing.


Session Chair

Qinghua Li

Session D-9

Cross-technology Communications

Conference
1:30 PM — 3:00 PM EDT
Local
May 19 Fri, 1:30 PM — 3:00 PM EDT
Location
Babbio 210

Enabling Direct Message Dissemination in Industrial Wireless Networks via Cross-Technology Communication

Di Mu, Yitian Chen, Xingjian Chen and Junyang Shi (State University of New York at Binghamton, USA); Mo Sha (Florida International University, USA)

0
IEEE 802.15.4 based industrial wireless networks have been widely deployed to connect sensors, actuators, and gateways in industrial facilities. Although wireless mesh networks work satisfactorily most of the time thanks to years of research, they are often complex and difficult to manage once deployed. Moreover, the delivery of time-critical messages suffers long delays, because all messages have to go through hop-by-hop transport. Recent studies show that adding a low-power wide-area network (LPWAN) radio to each device in the network can effectively overcome such limitations, because network management and time-critical messages can be transmitted from the gateway to field devices directly through long-distance LPWAN links. However, industry practitioners have shown a marked reluctance to embrace this solution because of the high cost of hardware modification. This paper presents a novel system, DIrect MEssage dissemination (DIME), that leverages cross-technology communication to enable direct message dissemination from the gateway to field devices in industrial wireless networks without adding a second radio to each field device. Experimental results show that our system effectively reduces the latency of delivering time-critical messages and improves network reliability compared to a state-of-the-art baseline.
Speaker Mo Sha (Florida International University)

Dr. Mo Sha is an Associate Professor in the Knight Foundation School of Computing and Information Sciences at Florida International University (FIU). Before joining FIU, he was an Assistant Professor in Computer Science at the State University of New York at Binghamton. His research interests include wireless networking, Internet of Things, network security, and cyber-physical systems. He received the NSF CAREER award in 2021 and CRII award in 2017, published more than 50 research papers, served on the technical program committees of 20 premier conferences, and reviewed papers for 21 journals.


Breaking the Throughput Limit of LED-Camera Communication via Superposed Polarization

Xiang Zou (Xi'an jiaotong University, China); Jianwei Liu (Zhejiang University, China); Jinsong Han (Zhejiang University & School of Cyber Science and Technology, China)

0
With the popularity of LED infrastructure and smartphone cameras, LED-Camera visible light communication (VLC) has become a realistic and promising technology. However, existing LED-Camera VLC has limited throughput due to the sampling manner of the camera. In this paper, by introducing a polarization dimension, we propose a hybrid modulation scheme with LED and polarization signals to boost throughput. Nevertheless, directly mixing LED and polarized signals may suffer from channel conflict. We exploit a well-designed packet structure and Symmetric Return-to-Zero Inverted (SRZI) coding to overcome the conflict. In addition, in the demodulation of the hybrid signal, we alleviate the noise of polarization on the LED signals via polarization background subtraction. We further propose a pixel-free approach to correct the perspective distortion caused by shifts of the view angle by adding polarizers around the liquid crystal array. We build a prototype of this hybrid modulation scheme using off-the-shelf optical components. Extensive experimental results demonstrate that the hybrid modulation scheme achieves reliable communication with 13.4 kbps throughput, which is 400% of the existing state-of-the-art LED-Camera VLC.
Speaker Xiang Zou (Xi'an Jiaotong University)

My name is Xiang Zou, and I am a PhD candidate at Xi'an Jiaotong University. I have been an exchange student at Zhejiang University since 2019. I am interested in VLC, RFID, and WiFi sensing.


Parallel Cross-technology Transmission from IEEE 802.11ax to Heterogeneous IoT Devices

Dan Xia, Xiaolong Zheng, Liang Liu and Huadong Ma (Beijing University of Posts and Telecommunications, China)

0
Cross-Technology Communication (CTC) enables direct interconnection among incompatible wireless technologies. However, for the downlink from WiFi to multiple IoT technologies, serial CTC transmission has extremely low spectrum efficiency. Recent parallel CTC uses IEEE 802.11g to send emulated ZigBee signals and lets the BLE receiver decode its data from the emulated ZigBee signals with a dedicated codebook. It still has low spectrum efficiency because 802.11g exclusively uses the whole channel. Besides, the codebook hinders reception on commodity BLE devices. In this paper, we propose WiTx, a parallel CTC that uses IEEE 802.11ax to emulate a composite signal that can be received by commodity BLE, ZigBee, and LoRa devices. Thanks to OFDMA, WiTx uses a single Resource Unit (RU) for parallel CTC and leaves the other RUs free for high-rate WiFi users. However, such a sophisticated composite signal is easily distorted by emulation imperfections, dynamic channel noise, the cyclic prefix, and center frequency offset. We propose a CTC link model that jointly models emulation and channel distortions. We then carve the emulated signal with elaborate compensations in both the time and frequency domains. We implement a prototype of WiTx on USRP and commodity devices. Experiments demonstrate that WiTx achieves efficient parallel transmission with a goodput of 390.24 kbps.
Speaker Dan Xia (Beijing University of Posts and Telecommunications)

I am Dan Xia, a fourth-year Ph.D. student in the School of Computer Science, Beijing University of Posts and Telecommunications.


LigBee: Symbol-Level Cross-Technology Communication from LoRa to ZigBee

Zhe Wang and Linghe Kong (Shanghai Jiao Tong University, China); Longfei Shangguan (Microsoft Cloud&AI, USA); Liang He (University of Colorado Denver, USA); Kangjie Xu (Shanghai Jiao Tong University, China); Yifeng Cao (Georgia Institute of Technology, USA); Hui Yu (Shanghai Jiao Tong University, China); Qiao Xiang (Xiamen University, China); Jiadi Yu (Shanghai Jiao Tong University, China); Teng Ma (Alibaba Group, China); Zhuo Song (Alibaba Cloud & Shanghai Jiao Tong University, China); Zheng Liu (Alibaba Group & Zhejiang University, China); Guihai Chen (Shanghai Jiao Tong University, China)

0
Low-power wide-area networks (LPWAN) evolve rapidly, with advanced communication primitives (e.g., coding, modulation) being continuously invented. This rapid iteration on LPWAN, however, forms a communication barrier between legacy wireless sensor nodes deployed years ago (e.g., ZigBee-based sensor nodes) and their latest competitors running a different communication protocol (e.g., LoRa-based IoT nodes): they work on the same frequency band but follow different MAC- and PHY-layer regulations and thus cannot talk to each other directly. To break this barrier, we propose LigBee, a cross-technology communication (CTC) solution that enables symbol-level communication from the latest LPWAN LoRa node to a legacy ZigBee node. We have implemented LigBee on both software-defined radios and commercial-off-the-shelf (COTS) LoRa and ZigBee nodes, and demonstrated that LigBee builds a reliable CTC link from the LoRa node to the ZigBee node on both platforms. Our experimental results show that (i) LigBee achieves a bit error rate (BER) on the order of 10^-3 with a 70-80% frame reception ratio (FRR), (ii) the range of the LigBee link is over 300 m, which is 6-7.5x the typical range of legacy ZigBee and the state-of-the-art solution, and (iii) the throughput of the LigBee link remains on the order of kbps, close to LoRa's throughput.
Speaker Yifeng Cao (Georgia Institute of Technology)

Yifeng Cao is currently a 4th year Ph.D. student at Georgia Institute of Technology. His research interest includes localization and ultra-wideband radio (UWB) based sensing. He is also interested in mobile computing and autonomous driving. Yifeng is now actively looking for a job in the industry. If you are interested in his research work, he is open to any discussion through email.


Session Chair

Zhangyu Guan

Session E-9

SDN

Conference
1:30 PM — 3:00 PM EDT
Local
May 19 Fri, 1:30 PM — 3:00 PM EDT
Location
Babbio 219

Nimble: Fast and Safe Migration of Network Functions

Sheng Liu (Microsoft, USA); Michael Reiter (Duke University, USA); Theophilus A. Benson (Brown University, USA)

0
Network function (NF) migration alongside (and possibly because of) routing policy updates is a delicate task, making it difficult to ensure that all traffic is processed by its required network functions, in order. Indeed, all previous solutions to this problem adapt routing policy only after NFs have been migrated, in a serial fashion. This paper proposes a design called Nimble for interleaving these tasks to complete both more efficiently while ensuring complete processing of traffic by the required NFs, provided that the route-update protocol enforces a specific property that we define. We demonstrate the benefits of the Nimble design using an implementation in Open vSwitch and the Ryu controller, building on both known routing update protocols and a new protocol of our design that implements specifically the needed property.
Speaker Michael Reiter (Duke University)

Michael Reiter is a James B. Duke Distinguished Professor in the Departments of Computer Science and Electrical & Computer Engineering at Duke University, which he joined in January 2021 following previous positions in industry (culminating as Director of Secure Systems Research at Bell Labs, Lucent) and academia (Professor of CS and ECE at Carnegie Mellon, and Distinguished Professor of CS at UNC-Chapel Hill). His technical contributions lie primarily in computer security and distributed computing. 


Efficient Verification of Timing-Related Network Functions in High-Speed Hardware

Tianqi Fang (University of Nebraska Lincoln, USA); Lisong Xu (University of Nebraska-Lincoln, USA); Witawas Srisa-an (University of Nebraska, USA)

0
To achieve line rate in the high-speed environment of modern networks, there is a continuing effort to offload network functions from software to programmable hardware (ProgHW). Although offloading yields higher performance, it complicates the verification of timing-related network functions (T-NFs). T-NFs use numerical timing values to complete various network tasks; for example, the congestion control algorithm BBR uses round-trip time to improve throughput. Errors in T-NFs can cause packet loss and poor throughput. However, verifying T-NFs in ProgHW often involves many clock cycles, which can result in an exponentially increasing number of test cases. Current verification methods either do not scale or sacrifice soundness for scalability.

In this paper, we propose an invariant-based method to improve verification without losing soundness. Our method is motivated by the observation that most T-NFs follow a few fixed patterns in using timing information. Based on these patterns, we develop a set of efficient and easy-to-validate invariants to constrain the examination space. Experiments on real T-NFs show that our method can speed up verification by orders of magnitude without compromising verification soundness.
Speaker Tianqi Fang (University of Nebraska-Lincoln)

I graduated in 2023 with a Ph.D. degree in computer science. My research concentrates on formal verification and its application to FPGA-based network functions.


CURSOR: Configuration Update Synthesis Using Order Rules

Zibin Chen (University of Massachusetts, Amherst, USA); Lixin Gao (University of Massachusetts at Amherst, USA)

0
Configuration updates to networks are frequent nowadays, in order to adapt to the rapid evolution of networks. To ensure the safety of a configuration update, network verification can be used to verify that network properties hold for the new configuration. However, configuration updates typically involve multiple routers changing their configurations, and the changes on these routers cannot be applied simultaneously. This leads to intermediate configurations, which might violate network properties such as reachability. Configuration update synthesis aims to find an order of applying changes on routers such that network properties hold for all intermediate configurations.

Existing approaches synthesize a safe update order by traversing the update order space, which is time-consuming and does not scale to a large number of configuration updates. This paper proposes CURSOR, a configuration update synthesis approach that extracts rules the update order should follow. We implement CURSOR and evaluate its performance on real-world configuration update scenarios. The experimental results show that we can accelerate synthesis by an order of magnitude on large-scale configuration updates.
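Once pairwise rules of the form "router u must be updated before router v" have been extracted, any order consistent with them is a topological order of the rule graph. The sketch below (with made-up router names) shows only that final ordering step, not CURSOR's rule extraction.

    # Illustrative sketch: derive an update order consistent with extracted
    # "u before v" rules via topological sort (graphlib is in the standard library).
    from graphlib import TopologicalSorter

    # Hypothetical extracted rules: each entry maps a router to the set of
    # routers that must be updated *before* it.
    rules = {
        "r3": {"r1", "r2"},   # r1 and r2 must change before r3
        "r2": {"r1"},
        "r4": {"r3"},
    }

    order = list(TopologicalSorter(rules).static_order())
    print("safe update order:", order)   # e.g. ['r1', 'r2', 'r3', 'r4']
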
Speaker Zibin Chen (University of Massachusetts, Amherst)

Zibin Chen is a Ph.D. student currently pursuing his degree with the Department of Electrical and Computer Engineering at the University of Massachusetts, Amherst. He received his Master of Science degree from the same institution in 2021 after completing his Bachelor of Engineering degree from Shandong Normal University in China. His research area includes network management, software-defined network, inter-domain routing and network verification.


CaaS: Enabling Control-as-a-Service for Time-Sensitive Networking

Zheng Yang, Yi Zhao, Fan Dang, Xiaowu He, Jiahang Wu, Hao Cao and Zeyu Wang (Tsinghua University, China); Yunhao Liu (Tsinghua University & The Hong Kong University of Science and Technology, China)

2
Flexible manufacturing is one of the core goals of Industry 4.0 and brings new challenges to current industrial control systems. Our detailed field study of the auto glass industry revealed that existing production lines are laborious to reconfigure, difficult to upscale, and costly to upgrade during production switching. Such inflexibility arises from the tight coupling of devices, controllers, and control tasks. In this work, we propose a new architecture for industrial control systems named Control-as-a-Service (CaaS). CaaS transfers and distributes control tasks from dedicated controllers into Time-Sensitive Networking (TSN) switches. By combining control and transmission functions in switches, CaaS virtualizes the whole industrial TSN network into one Programmable Logic Controller (PLC). We propose a set of techniques that realize end-to-end determinism for in-network industrial control, and a joint task and traffic scheduling algorithm. We evaluate the performance of CaaS on testbeds based on real-world networked control systems. The results show that the idea of CaaS is feasible and effective: CaaS achieves complete packet delivery, 42-45% lower latency, and three orders of magnitude lower jitter. We believe CaaS is a meaningful step towards the distribution, virtualization, and servitization of industrial control.
Speaker Zeyu Wang (Tsinghua University)

Zeyu Wang is a PhD candidate in School of Software, Tsinghua University, under the supervision of Prof. Zheng Yang. He received his B.E. degree in School of Software from Tsinghua University in 2020. His research interests include Time-Sensitive Networking, edge computing, and Internet of Things.


Session Chair

Houbing H. Song

Session F-9

Memory/Cache Management 2

Conference
1:30 PM — 3:00 PM EDT
Local
May 19 Fri, 1:30 PM — 3:00 PM EDT
Location
Babbio 220

Two-level Graph Caching for Expediting Distributed GNN Training

Zhe Zhang, Ziyue Luo and Chuan Wu (The University of Hong Kong, Hong Kong)

0
Graph Neural Networks (GNNs) are increasingly popular due to excellent performance on learning graph-structured data in various domains. With fast expanding graph sizes and feature dimensions, distributed GNN training has been adopted, with multiple concurrent workers learning on different portions of a large graph. It has been observed that a main bottleneck in distributed GNN training lies in graph feature fetching across servers, which dominates the training time of each training iteration at each worker. This paper studies efficient feature caching on each worker to minimize feature fetching overhead, in order to expedite distributed GNN training. Current distributed GNN training systems largely adopt static caching of fixed neighbor nodes. We propose a novel two-level dynamic cache design exploiting both GPU memory and host memory at each worker, and design efficient two-level dynamic caching algorithms based on online optimization and a lookahead batching mechanism. Our dynamic caching algorithms consider node requesting probabilities and heterogeneous feature fetching costs from different servers, achieving an O(log^3 k) competitive ratio in terms of overall feature-fetching communication cost (where k is the cache capacity). We evaluate practical performance of our caching design with testbed experiments, and show that our design achieves up to 5.4x convergence speed-up.
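A minimal two-level placement that admits nodes by expected saving — request probability times the cost of fetching the feature from its home server — conveys the flavor of GPU-plus-host caching. The scoring rule, capacities, and data below are simplifications, not the paper's online competitive algorithm.

    # Illustrative two-level feature cache (not the paper's competitive algorithm):
    # place the highest-score nodes in GPU memory and the next tier in host memory,
    # where score = request probability * remote fetch cost.
    def plan_two_level_cache(nodes, gpu_capacity, host_capacity):
        """nodes: dict node_id -> (request_prob, fetch_cost). Returns (gpu_set, host_set)."""
        ranked = sorted(nodes, key=lambda n: nodes[n][0] * nodes[n][1], reverse=True)
        gpu = set(ranked[:gpu_capacity])
        host = set(ranked[gpu_capacity:gpu_capacity + host_capacity])
        return gpu, host

    nodes = {0: (0.9, 5.0), 1: (0.2, 1.0), 2: (0.6, 4.0), 3: (0.8, 0.5), 4: (0.1, 9.0)}
    gpu, host = plan_two_level_cache(nodes, gpu_capacity=1, host_capacity=2)
    print("GPU cache:", gpu, "host cache:", host)
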
Speaker Zhe Zhang (The University of Hong Kong)

Zhe Zhang is currently a Ph.D. candidate in the Department of Computer Science, The University of Hong Kong. She received her B.E. degree in 2019, from the Department of Computer Science and Technology, Zhejiang University. Her research interests include distributed machine learning algorithms and systems.


Galliot: Path Merging Based Betweenness Centrality Algorithm on GPU

Zheng Zhigao and Bo Du (Wuhan University, China)

1
Betweenness centrality (BC) is widely used to measure a vertex's significance based on the frequency with which the vertex appears on the shortest paths between other vertices. However, most recent BC algorithms suffer from high auxiliary memory consumption. To reduce the memory consumption of BC computation, we propose a path-merging-based algorithm called Galliot to calculate BC values on the GPU, which aims to minimize on-board memory consumption and enable BC computation of large-scale graphs on the GPU. The proposed algorithm requires O(n) space and runs in O(mn) time on unweighted graphs. We present the theoretical principle of the proposed path-merging method. Moreover, we propose a locality-oriented policy to maintain and update the worklist to improve GPU data locality. In addition, we conducted extensive experiments on NVIDIA GPUs to show the performance of Galliot. The results show that Galliot can process larger graphs, with 11.32x more vertices and 5.67x more edges than the graphs that recent works can process. Moreover, Galliot achieves up to 38.77x speedup over existing methods.
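For context, the sequential baseline that GPU BC systems such as Galliot accelerate is Brandes' algorithm: a BFS from every source followed by backward dependency accumulation. A compact version for unweighted graphs is sketched below; this is the classical algorithm, not Galliot's path-merging GPU kernel.

    # Classical Brandes betweenness-centrality baseline on an unweighted graph.
    from collections import deque

    def betweenness(adj):
        """adj: dict node -> iterable of neighbours (undirected, unweighted)."""
        bc = {v: 0.0 for v in adj}
        for s in adj:
            stack, preds = [], {v: [] for v in adj}
            sigma = {v: 0 for v in adj}; sigma[s] = 1      # number of shortest paths
            dist = {v: -1 for v in adj}; dist[s] = 0
            queue = deque([s])
            while queue:                                   # BFS from s
                v = queue.popleft(); stack.append(v)
                for w in adj[v]:
                    if dist[w] < 0:
                        dist[w] = dist[v] + 1; queue.append(w)
                    if dist[w] == dist[v] + 1:
                        sigma[w] += sigma[v]; preds[w].append(v)
            delta = {v: 0.0 for v in adj}
            while stack:                                   # back-propagate dependencies
                w = stack.pop()
                for v in preds[w]:
                    delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
                if w != s:
                    bc[w] += delta[w]
        return bc   # for undirected graphs, halve the values if desired

    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}           # a path graph 0-1-2-3
    print(betweenness(adj))                                # nodes 1 and 2 are most central
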
Speaker Xie Peichen

Xie Peichen is a postgraduate student at Wuhan University, previously an undergraduate at Xiamen University, focusing on high-performance computing and graph computing. He has published papers as a co-author in INFOCOM 2023 and JSAC. He won the highest award, Outstanding Winner, in the Mathematical Contest in Modeling in 2021.


Economic Analysis of Joint Mobile Edge Caching and Peer Content Sharing

Changkun Jiang (Shenzhen University, China)

0
Mobile edge caching (MEC) allows edge devices to cache popular contents and deliver them to end-users directly, which can effectively alleviate increasing backbone loads and improve end-users' quality of service. Peer content sharing (PCS) enables edge devices to share cached contents with others and can further increase caching efficiency. While many efforts have been devoted to content caching or sharing technologies, the complicated technical and economic interplay between the two remains open to study. In this paper, we propose a joint MEC-PCS framework and focus on capturing the strategic interactions between different types of edge devices. Specifically, we model their interactions as a non-cooperative game, where each device (player) can choose to be an agent (who caches and shares contents with others) or a requester (who does not cache but requests contents from others) for each content item. We systematically analyze the existence and uniqueness of the game equilibrium under generic usage-based pricing (for PCS). To address the incomplete-information issue, we further design a behavior rule that allows players to reach the equilibrium via a dynamic learning algorithm. Simulations show that the joint MEC-PCS framework can reduce the total system cost by 60%, compared with the benchmark pure caching system without PCS.
Speaker Changkun Jiang (Shenzhen University, China)

Changkun Jiang received his Ph.D. in Information Engineering from The Chinese University of Hong Kong in 2017. He is currently a faculty member in the College of Computer Science and Software Engineering at Shenzhen University, China. His research interests are primarily in artificial intelligence and economics for networked systems.


Enabling Switch Memory Management for Distributed Training with In-Network Aggregation

Bohan Zhao (Tsinghua University, China); Jianbo Dong and Zheng Cao (Alibaba Group, China); Wei Nie (Shenzhen University, China); Chang Liu (Shenzhen University, China); Wenfei Wu (Peking University, China)

0
Distributed training (DT) in shared clusters usually deploys a scheduler to allocate resources to multiple concurrent jobs. Meanwhile, a recent acceleration primitive, In-Network Aggregation (INA), introduces switch memory as a new critical resource for DT jobs, outside the prior scheduler's management. The lack of switch memory management leads to inefficient cluster resource usage. We build INAlloc, a switch memory management system for DT job schedulers to improve INA-empowered DT jobs in shared clusters. INAlloc adds a switch memory management layer to organize the physical switch memory, allocate memory to jobs, and provide friendly interfaces to schedulers. INAlloc incorporates switch memory into modeling a job's completion time (JCT) and its resources, which assists the scheduler in deciding the switch memory allocation. INAlloc overcomes the challenges of consistent and nondisruptive runtime switch memory reallocation. Our prototype and evaluation on real-world traces show that INAlloc can reduce jobs' deadline miss ratio by 75% and JCT by 27%.
Speaker Bohan Zhao (Tsinghua University)

Bohan Zhao is a Ph.D. candidate at Tsinghua University. His research interests include programmable networks and the information infrastructure for distributed applications, such as machine learning, high-performance computing, and big data.


Session Chair

Gil Zussman

Session G-9

Cloud/Edge Computing 2

Conference
1:30 PM — 3:00 PM EDT
Local
May 19 Fri, 1:30 PM — 3:00 PM EDT
Location
Babbio 221

TanGo: A Cost Optimization Framework for Tenant Task Placement in Geo-distributed Clouds

Luyao Luo, Gongming Zhao and Hongli Xu (University of Science and Technology of China, China); Zhuolong Yu (Johns Hopkins University, USA); Liguang Xie (Futurewei Technologies, USA)

0
Cloud infrastructure has gradually become geographically distributed in order to provide anywhere, anytime connectivity to tenants all over the world. Tenant task placement in geo-distributed clouds involves three critical and coupled factors: regional diversity in electricity prices, access delay for tenants, and traffic demand among tasks. However, existing works disregard either the regional difference in electricity prices or the tenant requirements for tasks in geo-distributed clouds, resulting in increased operating costs or low user QoS.
To bridge the gap, we design TanGo, a cost optimization framework for tenant task placement in geo-distributed clouds. However, it is non-trivial to achieve such an optimization framework while meeting all the tenant requirements. To this end, we first formulate the electricity cost minimization for task placement as a constrained mixed-integer non-linear programming problem. We then propose a near-optimal algorithm with a tight approximation ratio (1 - 1/e) using an effective submodular-based method. Results of in-depth simulations based on real-world datasets show the effectiveness of our algorithm as well as an overall 10%-30% reduction in electricity expenses compared to commonly-adopted alternatives.
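The (1 - 1/e) guarantee comes from the classical greedy rule for maximizing a monotone submodular function under a cardinality constraint: repeatedly add the element with the largest marginal gain. The generic sketch below uses a coverage-style objective as a stand-in, not TanGo's actual placement objective.

    # Generic greedy for monotone submodular maximization under a cardinality
    # constraint (the source of the classical 1 - 1/e guarantee).
    def greedy_submodular(ground_set, f, k):
        chosen = set()
        for _ in range(k):
            best = max((e for e in ground_set if e not in chosen),
                       key=lambda e: f(chosen | {e}) - f(chosen))
            chosen.add(best)
        return chosen

    # Example: weighted coverage -- each candidate site "covers" a set of regions.
    covers = {"a": {1, 2}, "b": {2, 3, 4}, "c": {4, 5}, "d": {1}}
    coverage = lambda S: len(set().union(*(covers[s] for s in S)) if S else set())
    print(greedy_submodular(covers.keys(), coverage, k=2))   # e.g. {'a', 'b'}
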
Speaker Zhenguo Ma (University of Science and Technology of China)

Zhenguo Ma received the B.S. degree in software engineering from the Shandong University, China, in 2018. He is currently pursuing his Ph.D. degree in the School of Computer Science and Technology, University of Science and Technology of China. His research interests include cloud computing, edge computing and federated learning.


An Approximation for Job Scheduling on Cloud with Synchronization and Slowdown Constraints

Dejun Kong and Zhongrui Zhang (Shanghai Jiao Tong University, China); Yangguang Shi (Shandong University, China); Xiaofeng Gao (Shanghai Jiao Tong University, China)

0
Cloud computing has developed rapidly in recent years and provides service to many applications, in which job scheduling becomes more and more important for improving the quality of service. Parallel processing on the cloud requires different machines to start simultaneously on the same job and introduces a processing slowdown due to communication overhead, defined here as the synchronization constraint and parallel slowdown. This paper investigates a new job scheduling problem of makespan minimization on uniform machines and identical machines with the synchronization constraint and parallel slowdown. We first conduct a complexity analysis proving that the problem is difficult in the face of adversarial job allocation. Then we propose a novel job scheduling algorithm, United Wrapping Scheduling (UWS), and prove that UWS admits an O(log m)-approximation for makespan minimization over m uniform machines. For the special case of identical machines, UWS is simplified to the Sequential Allocation, Refilling and Immigration algorithm (SARI), which is proved to have a constant approximation ratio of 8 (tight up to a factor of 4). Performance evaluation shows that UWS and SARI achieve better makespans, with an empirical approximation ratio of about 2, compared to the baseline methods United-LPT and FIFO and to the lower bounds.
Speaker Dejun Kong (Shanghai Jiao Tong University)

Dejun Kong is a Ph.D. candidate at Shanghai Jiao Tong University. His research areas include scheduling algorithms, distributed computing, and data analytics.


Time and Cost-Efficient Cloud Data Transmission based on Serverless Computing Compression

Rong Gu and Xiaofei Chen (Nanjing University, China); Haipeng Dai (Nanjing University & State Key Laboratory for Novel Software Technology, China); Shulin Wang (Nanjing University, China); Zhaokang Wang and Yaofeng Tu (Nanjing University of Aeronautics and Astronautics, China); Yihua Huang (Nanjing University, China); Guihai Chen (Shanghai Jiao Tong University, China)

0
Nowadays, there is substantial demand for cross-region data transmission on the cloud. Serverless computing is a promising technique to compress data before transmission. However, it is challenging to estimate the data transmission time and monetary cost with serverless compression. In addition, minimizing the data transmission cost is non-trivial due to the enormous parameter space. This paper focuses on this problem and makes the following contributions: (1) We propose an empirical data transmission time and monetary cost model based on serverless compression. (2) For single-task cloud data transmission, we propose two efficient parameter search methods, based on Sequential Quadratic Programming and Eliminate-then-Divide-and-Conquer, with error upper bounds. (3) Furthermore, a parameter search method based on dynamic programming and numerical computation is proposed to reduce the algorithm complexity from exponential to linear for concurrent multi-task scenarios. We implement the actual system and evaluate it with various workloads and application cases on the real-world AWS cloud. Experimental results show that the proposed approach can improve parameter search efficiency by more than 3x compared with state-of-the-art methods and achieves better parameter quality. Compared with other competing transmission approaches, our approach achieves higher time efficiency and lower monetary cost.
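The underlying trade-off — spend serverless compute (and money) on compression to save transfer time and egress cost — can be captured by a small decision function. The cost formulas and prices below are illustrative assumptions, not the paper's empirical model.

    # Illustrative trade-off model (assumed formulas/prices, not the paper's):
    # compare transferring raw data against compressing it first with a
    # serverless function, on both time and money.
    def plan_transfer(size_gb, bandwidth_gbps, compress_ratio, compress_gbps,
                      egress_usd_per_gb=0.09, serverless_usd_per_gb=0.01):
        raw_time = size_gb * 8 / bandwidth_gbps
        raw_cost = size_gb * egress_usd_per_gb

        comp_time = size_gb * 8 / compress_gbps + size_gb * compress_ratio * 8 / bandwidth_gbps
        comp_cost = size_gb * serverless_usd_per_gb + size_gb * compress_ratio * egress_usd_per_gb

        return {"raw": (raw_time, raw_cost), "compressed": (comp_time, comp_cost)}

    plans = plan_transfer(size_gb=100, bandwidth_gbps=1.0, compress_ratio=0.4, compress_gbps=4.0)
    for name, (t, c) in plans.items():
        print(f"{name:>10}: {t:7.1f} s, ${c:6.2f}")
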
Speaker Rong Gu (Nanjing University)

Rong Gu is an assistant professor in the Department of Computer Science and Technology at Nanjing University. His research interests include cloud and big data computing systems, efficient cache/index systems, and edge systems. He has published over 40 papers in USENIX ATC, ICDE, WWW, INFOCOM, VLDBJ, IEEE TPDS, TNET, TMC, IPDPS, ICPP, IWQoS, and DASFAA, and has published a monograph. He received the IEEE TCSC Award for Excellence in Scalable Computing (Early Career), the IEEE HPCC 2022 Best Paper Award (first author), the first prize of the Jiangsu Science and Technology Prize in 2018, the Tencent Cloud Valuable Professional (TVP) Award in 2021, and first place in the 30th SortBenchmark Competition CloudSort track (record holder). His research results have been adopted by well-known open-source projects such as Apache Spark and Alluxio, and by leading IT/domain companies, including Alibaba, Baidu, Tencent, ByteDance, Huatai Securities, Intel, Sinopec, and Weibo. He is the community chair of the Fluid open-source project (a CNCF Sandbox project) and a founding PMC member and maintainer of the Alluxio (formerly Tachyon) open-source project. He is also a co-program chair of the 15th IEEE iThings, co-chair of the 23rd ChinaSys, and a TPC member of SOSP'21/OSDI'22/USENIX ATC'22 Artifacts, AAAI'20, and IEEE IPDPS'22.


Enabling Age-Aware Big Data Analytics in Serverless Edge Clouds

Zichuan Xu, Yuexin Fu and Qiufen Xia (Dalian University of Technology, China); Hao Li (China Coal Research Institute, China)

0
In this paper, we aim to fill the gap between serverless computing and mobile edge computing by enabling query evaluation for big data analytics in short-lived functions of a serverless edge cloud (SEC). Specifically, we formulate novel age-aware big data query evaluation problems in an SEC so that the age of data is minimized, where the age of data refers to the time difference between the finish time of analyzing a dataset and the generation time of the dataset. We propose approximation algorithms for the age-aware big data query evaluation problem with a single query, using a novel parameterized virtualization technique that strives for a fine trade-off between short-lived functions and the large resource demands of big data queries. We also devise an online learning algorithm with a bounded regret for the problem with multiple queries arriving dynamically and without prior knowledge of the resource demands of the queries. We finally evaluate the performance of the proposed algorithms by extensive simulations. Simulation results show that the performance of our algorithms is promising.
Speaker Yuexin Fu

Yuexin Fu is a Master candidate at Dalian University of Technology. His research interests include edge computing and serverless computing.


Session Chair

Li Chen

Session Break-2-Day3

Coffee Break

Conference
3:00 PM — 3:30 PM EDT
Local
May 19 Fri, 3:00 PM — 3:30 PM EDT
Location
Babbio Lobby

Session A-10

Distributed Learning

Conference
3:30 PM — 5:00 PM EDT
Local
May 19 Fri, 3:30 PM — 5:00 PM EDT
Location
Babbio 122

DIAMOND: Taming Sample and Communication Complexities in Decentralized Bilevel Optimization

Peiwen Qiu, Yining Li and Zhuqing Liu (The Ohio State University, USA); Prashant Khanduri (University of Minnesota, USA); Jia Liu and Ness B. Shroff (The Ohio State University, USA); Elizabeth Serena Bentley (AFRL, USA); Kurt Turck (United States Air Force Research Labs, USA)

1
Decentralized bilevel optimization has received increasing attention recently due to its foundational role in many emerging multi-agent learning paradigms (e.g., multi-agent meta-learning and multi-agent reinforcement learning) over peer-to-peer edge networks. However, to work with the limited computation and communication capabilities of edge networks, a major challenge in developing decentralized bilevel optimization techniques is to lower sample and communication complexities. This motivates us to develop a new decentralized bilevel optimization called DIAMOND (decentralized single-timescale stochastic approximation with momentum and gradient-tracking). The contributions of this paper are as follows: i) our DIAMOND algorithm adopts a single-loop structure rather than following the natural double-loop structure of bilevel optimization, which offers low computation and implementation complexity; ii) compared to existing approaches, the DIAMOND algorithm does not require any full gradient evaluations, which further reduces both sample and computational complexities; iii) through a careful integration of momentum information and gradient tracking techniques, we show that the DIAMOND algorithm enjoys O(ε^(-3/2)) in sample and communication complexities for achieving an ε-stationary solution, both of which are independent of the dataset sizes and significantly outperform existing works. Extensive experiments also verify our theoretical findings.
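
For readers unfamiliar with the gradient-tracking ingredient that DIAMOND combines with momentum, a minimal single-level sketch is given below. The toy quadratic objective, the uniform mixing matrix, and the step size are illustrative assumptions; this is the classical gradient-tracking building block, not the paper's bilevel, momentum-augmented algorithm.

```python
# Minimal sketch of decentralized gradient tracking (single-level, toy quadratic).
# The mixing matrix, step size, and objective are illustrative assumptions.
import numpy as np

n, d, alpha, T = 4, 3, 0.05, 300
rng = np.random.default_rng(0)
A = [rng.standard_normal((d, d)) for _ in range(n)]
b = [rng.standard_normal(d) for _ in range(n)]
grad = lambda i, x: A[i].T @ (A[i] @ x - b[i])    # gradient of 0.5*||A_i x - b_i||^2

W = np.full((n, n), 1.0 / n)                      # doubly-stochastic mixing matrix (assumed)
x = np.zeros((n, d))                              # local models
y = np.array([grad(i, x[i]) for i in range(n)])   # gradient trackers, y_i(0) = grad_i(x_i(0))

for _ in range(T):
    x_new = W @ x - alpha * y                     # consensus step + tracked-gradient descent
    g_new = np.array([grad(i, x_new[i]) for i in range(n)])
    g_old = np.array([grad(i, x[i]) for i in range(n)])
    y = W @ y + g_new - g_old                     # track the network-average gradient
    x = x_new

print("consensus model:", x.mean(axis=0))
```
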
Speaker Peiwen Qiu (The Ohio State University)

Peiwen Qiu is a Ph.D. student at The Ohio State University under the supervision of Prof. Jia (Kevin) Liu. Her research interests include but are not limited to optimization theory and algorithms for bilevel optimization, decentralized bilevel optimization and federated learning.


PipeMoE: Accelerating Mixture-of-Experts through Adaptive Pipelining

Shaohuai Shi (Harbin Institute of Technology, Shenzhen, China); Xinglin Pan and Xiaowen Chu (Hong Kong Baptist University, Hong Kong); Bo Li (Hong Kong University of Science and Technology, Hong Kong)

0
Large models have attracted much attention in the AI area. The sparsely activated mixture-of-experts (MoE) technique pushes the model size to the trillion level with a sub-linear increase in computation, as an MoE layer can be equipped with many separate experts but only one or two experts need to be activated for each input. However, the dynamic expert activation of MoE introduces extensive communication in distributed training. In this work, we propose PipeMoE to adaptively pipeline the communications and computations in MoE to maximally hide the communication time. Specifically, we first identify the root reason why a higher pipeline degree does not always achieve better performance in training MoE models. Then we formulate an optimization problem that aims to minimize the training iteration time. To solve this problem, we build performance models for computation and communication tasks in MoE and develop an optimal solution to determine the pipeline degree such that the iteration time is minimal. We conduct extensive experiments with 174 typical MoE layers and two real-world NLP models on a 64-GPU cluster. Experimental results show that our PipeMoE almost always chooses the best pipeline degree and outperforms state-of-the-art MoE training systems by 5%-77% in training time.
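
A first-order sketch of the pipeline-degree search is shown below. The linear computation/communication models, their coefficients, and the simple fill-plus-steady-state iteration-time formula are illustrative assumptions, not PipeMoE's fitted performance models; the point is only that, given such models, the best degree can be found by enumerating candidates and taking the minimum predicted iteration time.

```python
# Hypothetical sketch: choose an MoE pipeline degree from simple performance models.
# The linear cost models t = a + b * tokens and their coefficients are assumptions.
TOKENS = 32768            # tokens routed through one MoE layer per iteration (assumed)

def t_comp(tokens):       # expert computation time for one chunk (seconds, assumed model)
    return 0.4e-3 + 1.2e-7 * tokens

def t_comm(tokens):       # all-to-all communication time for one chunk (seconds, assumed model)
    return 0.8e-3 + 0.9e-7 * tokens

def iter_time(degree):
    """Pipeline fill with one chunk's comm, overlap the rest, then drain."""
    chunk = TOKENS / degree
    return t_comm(chunk) + degree * max(t_comp(chunk), t_comm(chunk)) + t_comm(chunk)

candidates = range(1, 17)
best = min(candidates, key=iter_time)
for r in candidates:
    print(f"degree={r:2d}  predicted iteration time={iter_time(r)*1e3:.3f} ms")
print("chosen pipeline degree:", best)
```
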
Speaker Shaohuai Shi

Shaohuai Shi is currently an Assistant Professor at the School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen. Previously, he was a Research Assistant Professor at the Department of Computer Science & Engineering of The Hong Kong University of Science and Technology. His current research focus is distributed machine learning systems.


Accelerating Distributed K-FAC with Efficient Collective Communication and Scheduling

Lin Zhang (Hong Kong University of Science and Technology, Hong Kong); Shaohuai Shi (Harbin Institute of Technology, Shenzhen, China); Bo Li (Hong Kong University of Science and Technology, Hong Kong)

0
Kronecker-factored approximate curvature (K-FAC) has been shown to achieve faster convergence than SGD in training deep neural networks. However, existing distributed K-FAC (D-KFAC) relies on the all-reduce collective for communications and scheduling, which incurs excessive communications in each iteration. In this work, we propose a new form of D-KFAC with a reduce-based alternative to eliminate redundant communications. This poses new challenges and opportunities in that the reduce collective requires a root worker to collect the results, which considerably complicates the communication scheduling. To this end, we formulate an optimization problem that determines tensor fusion and tensor placement simultaneously aiming to minimize the training iteration time. We develop novel communication scheduling strategies and propose a placement-aware D-KFAC (PAD-KFAC) training algorithm, which further improves communication efficiency. Our experimental results on a 64-GPU cluster interconnected with 10Gb/s and 100Gb/s Ethernet show that our PAD-KFAC can achieve an average of 27% and 17% improvement over state-of-the-art D-KFAC methods, respectively.
Speaker Lin Zhang (Hong Kong University of Science and Technology)

Lin Zhang is currently pursuing the Ph.D. degree in the Department of Computer Science and Engineering at the Hong Kong University of Science and Technology. His research interests include machine learning systems and algorithms, with a special focus on distributed DNNs training, and second-order optimization methods.


DAGC: Data-aware Adaptive Gradient Compression

Rongwei Lu (Tsinghua University, China); Jiajun Song (Dalian University of Technology, China); Bin Chen (Harbin Institute of Technology, Shenzhen, China); Laizhong Cui (Shenzhen University, China); Zhi Wang (Tsinghua University, China)

1
Gradient compression algorithms are widely used to alleviate the communication bottleneck in distributed ML. However, existing gradient compression algorithms suffer from accuracy degradation in non-IID scenarios, because a uniform compression scheme is used to compress gradients at workers with different data distributions and volumes: workers with larger volumes of data are forced to adopt the same aggressive compression ratios as the others. Assigning different compression ratios to workers with different data distributions and volumes is thus a promising solution.
In this study, we first derive a function capturing the correlation between the number of training iterations needed for a model to converge to the same accuracy and the compression ratios at different workers; this function shows in particular that workers with larger data volumes should be assigned higher compression ratios to guarantee better accuracy. Then, we formulate the assignment of compression ratios to the workers as an n-variable chi-square nonlinear optimization problem under a fixed and limited total communication constraint. We propose an adaptive gradient compression strategy called DAGC, which assigns each worker a different compression ratio according to its data volume. Our experiments confirm that DAGC achieves better performance under highly imbalanced data volume distributions and restricted communication.
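
The core idea of per-worker compression ratios tied to data volume can be illustrated with a top-k sparsification sketch. The volume-proportional ratio mapping below is an illustrative assumption, not DAGC's chi-square-optimal assignment; here "ratio" means the fraction of gradient entries kept, so larger data volumes map to milder compression.

```python
# Illustrative sketch: top-k gradient sparsification with per-worker compression ratios.
# The volume-proportional ratio assignment is an assumption, not DAGC's optimal rule.
import numpy as np

def topk_compress(grad, ratio):
    """Keep the largest-magnitude `ratio` fraction of entries; zero out the rest."""
    k = max(1, int(ratio * grad.size))
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    sparse = np.zeros_like(grad)
    sparse[idx] = grad[idx]
    return sparse

rng = np.random.default_rng(0)
data_volumes = np.array([1000, 5000, 20000])               # samples held by each worker (assumed)
ratios = 0.01 + 0.09 * data_volumes / data_volumes.max()   # larger volume -> keep more entries

grads = [rng.standard_normal(10000) for _ in data_volumes]
compressed = [topk_compress(g, r) for g, r in zip(grads, ratios)]
aggregated = np.mean(compressed, axis=0)                   # server-side averaging of sparse gradients
print("per-worker ratios:", np.round(ratios, 3))
```
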
Speaker Rongwei Lu (Tsinghua University)

Rongwei Lu is a second-year Master's student in Computer Technology at Tsinghua University, advised by Prof. Zhi Wang. His research interests lie in accelerating machine learning training from both the communication and computation perspectives. He was a research intern in the Systems Research Group of MSRA. This paper is his first published paper.


Session Chair

Yanjiao Chen

Session B-10

Learning

Conference
3:30 PM — 5:00 PM EDT
Local
May 19 Fri, 3:30 PM — 5:00 PM EDT
Location
Babbio 104

Learning to Schedule Tasks with Deadline and Throughput Constraints

Qingsong Liu and Zhixuan Fang (Tsinghua University, China)

0
We consider the task scheduling scenario where the controller activates one of $K$ task types at each time. Each task induces a random completion time, and a reward is obtained only after the task is completed. The statistics of the completion time and the reward distributions of all task types are unknown to the controller. The controller needs to learn to schedule tasks to maximize the accumulated reward within a given time horizon $T$. Motivated by practical scenarios, we require the designed policy to satisfy a system throughput constraint. In addition, we introduce an interruption mechanism to terminate ongoing tasks that last longer than certain deadlines. To address this scheduling problem, we model it as an online learning problem with deadline and throughput constraints. Then, we characterize the optimal offline policy and develop efficient online learning algorithms based on the Lyapunov method. We prove that our online learning algorithm achieves an $O(\sqrt{T})$ regret and zero constraint violations. We also conduct simulations to evaluate the performance of our developed learning algorithms.
Speaker Qingsong Liu (Tsinghua University)

Qingsong Liu received the B.Eng. degree in electronic engineering from Tsinghua University, China. He is currently pursuing the Ph.D. degree with the Institute for Interdisciplinary Information Sciences (IIIS) of Tsinghua University. His research interests include online learning, and networked and computer systems modeling and optimization. He has published several papers in IEEE Globecom, IEEE ICASSP, IEEE WiOpt, IEEE INFOCOM, ACM/IFIP Performance, and NeurIPS.


A New Framework: Short-Term and Long-Term Returns in Stochastic Multi-Armed Bandit

Abdalaziz Sawwan and Jie Wu (Temple University, USA)

0
Stochastic Multi-Armed Bandit (MAB) has recently been studied widely due to its vast range of applications. The classical model considers the reward of a pulled arm to be observed after a time delay that is sampled from a random distribution assigned to each arm. In this paper, we propose an extended framework in which pulling an arm gives both an instant (short-term) reward and a delayed (long-term) reward at the same time. The distributions of the short-term and long-term reward values are related through a previously known relationship. The distribution of the time delay for an arm is independent of the reward distributions of the arm. In our work, we devise three UCB-based algorithms, two of which are near-optimal-regret algorithms for this new model, with the corresponding regret analysis for each of them. Additionally, the random distributions for the time delay values are allowed to yield infinite time, which corresponds to the case where an arm only gives a short-term reward. Finally, we evaluate our algorithms and compare this paradigm with previously known models on both a synthetic data set and a real data set that reflects one of the potential applications of this model.
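
As background for the UCB-based algorithms, a classical UCB1 sketch adapted to credit both an instant and a delayed reward is given below. The bandit environment, the reward/delay parameters, and the simple "credit the long-term reward when it arrives" bookkeeping are illustrative assumptions, not the paper's near-optimal-regret algorithms.

```python
# Simplified UCB sketch with instant (short-term) and delayed (long-term) rewards.
# The environment and the delay handling are illustrative assumptions.
import math
import random

K, T = 3, 5000
short_means = [0.2, 0.5, 0.4]      # assumed short-term reward means
long_means = [0.6, 0.1, 0.3]       # assumed long-term reward means
delays = [5, 20, 10]               # assumed mean delays of the long-term reward

counts = [0] * K
totals = [0.0] * K                  # accumulated (short + long) reward per arm
pending = []                        # (arrival_time, arm, delayed_reward)

random.seed(0)
for t in range(1, T + 1):
    # Credit delayed rewards that have arrived by now.
    arrived = [p for p in pending if p[0] <= t]
    pending = [p for p in pending if p[0] > t]
    for _, arm, r in arrived:
        totals[arm] += r

    # UCB1 index: play each arm once, then maximize mean + exploration bonus.
    if t <= K:
        a = t - 1
    else:
        a = max(range(K),
                key=lambda i: totals[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i]))

    counts[a] += 1
    totals[a] += random.random() < short_means[a]           # instant reward observed now
    delay = random.expovariate(1.0 / delays[a])
    pending.append((t + delay, a, float(random.random() < long_means[a])))

print("pull counts:", counts)
```
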
Speaker Abdalaziz Sawwan (Temple University)

Abdalaziz Sawwan is a third-year Ph.D. student in Computer and Information Sciences at Temple University. He is a Research Assistant at the Center for Networked Computing. Sawwan received his bachelor’s degree in Electrical Engineering from the University of Jordan in 2020. His current research interests include multi-armed bandits, communication networks, mobile charging, and wireless networks.


DeepScheduler: Enabling Flow-Aware Scheduling in Time-Sensitive Networking

Xiaowu He, Xiangwen Zhuge, Fan Dang, Wang Xu and Zheng Yang (Tsinghua University, China)

2
Time-Sensitive Networking (TSN) has been considered the most promising network paradigm for time-critical applications (e.g., industrial control), and traffic scheduling is the core of TSN to ensure low latency and determinism. As the demand for flexible production increases, industrial network topologies and settings change frequently due to pipeline switches. As a result, there is a pressing need for a more efficient TSN scheduling algorithm. In this paper, we propose DeepScheduler, a fast and scalable flow-aware TSN scheduler based on deep reinforcement learning. In contrast to prior work that heavily relies on expert knowledge or problem-specific assumptions, DeepScheduler automatically learns effective scheduling policies from the complex dependencies among data flows. We design a scalable neural network architecture that can process arbitrary network topologies with informative representations of the problem, and decompose the problem decision space for efficient model training. In addition, we develop a suite of TSN-compatible testbeds with hardware-software co-design and DeepScheduler integration. Extensive experiments on both simulation and physical testbeds show that DeepScheduler runs >150/5 times faster and improves the schedulability by 36%/39% compared to state-of-the-art heuristic/expert-based methods. With both efficiency and effectiveness, DeepScheduler makes scheduling no longer an obstacle towards flexible manufacturing.
Speaker Xiaowu He (Tsinghua university)

Xiaowu He is a PhD candidate in School of Software, Tsinghua University, under the supervision of Prof. Zheng Yang. He received his B.E. degree in School of Computer Science and Engineering from University of Electronic Science and Technology of China in 2019. His research interests include Time-Sensitive Networking, edge computing, and Internet of Things.


The Power of Age-based Reward in Fresh Information Acquisition

Zhiyuan Wang, Qingkai Meng, Shan Zhang and Hongbin Luo (Beihang University, China)

0
Many Internet platforms collect fresh information of various points of interest (PoIs) relying on users who happen to be near the PoIs. The platform offers rewards to incentivize users and compensate the costs they incur from information acquisition. In practice, the user cost (and its distribution) is hidden from the platform, so it is challenging to determine the optimal reward. In this paper, we investigate how the platform dynamically rewards the users, aiming to jointly reduce the age of information (AoI) and the operational expenditure (OpEx). Due to the hidden cost distribution, this is an online non-convex learning problem with partial feedback. To overcome the challenge, we first design an age-based rewarding scheme, which decouples the OpEx from the unknown cost distribution and enables the platform to accurately control its OpEx. We then take advantage of the age-based rewarding scheme and propose an exponentially discretizing and learning (EDAL) policy for platform operation. We prove that the EDAL policy performs asymptotically as well as the optimal decision (derived based on the cost distribution). Simulation results show that the age-based rewarding scheme protects the platform's OpEx from the influence of the user characteristics, and verify the asymptotic optimality of the EDAL policy.
Speaker Zhiyuan Wang (Beihang University)

Zhiyuan Wang is an associate professor with the School of Computer Science and Engineering at Beihang University. He received his PhD from The Chinese University of Hong Kong (CUHK) in 2019. His research interests include network economics, game theory, and online learning.


Session Chair

Saptarshi Debroy

Session C-10

Security and Trust

Conference
3:30 PM — 5:00 PM EDT
Local
May 19 Fri, 3:30 PM — 5:00 PM EDT
Location
Babbio 202

Mind Your Heart: Stealthy Backdoor Attack on Dynamic Deep Neural Network in Edge Computing

Tian Dong (Shanghai Jiao Tong University, China); Ziyuan Zhang (Beijing University of Posts and Telecommunications, China); Han Qiu (Tsinghua University, China); Tianwei Zhang (Nanyang Technological University, Singapore); Hewu Li (Tsinghua University, China); Terry Wang (Alibaba, China)

0
Transforming off-the-shelf deep neural network (DNN) models into dynamic multi-exit architectures can achieve inference and transmission efficiency by fragmenting and distributing a large DNN model in edge computing scenarios (e.g., edge devices and cloud servers). In this paper, we propose a novel backdoor attack specifically on dynamic multi-exit DNN models. Particularly, we inject a backdoor by poisoning one DNN model's shallow hidden layers, targeting not the vanilla DNN model itself but only its dynamically deployed multi-exit architectures. Our backdoored vanilla model behaves normally in terms of performance and cannot be activated even with the correct trigger. However, the backdoor will be activated when the victims acquire this model and transform it into a dynamic multi-exit architecture at deployment. We conduct extensive experiments to prove the effectiveness of our attack on three architectures (ResNet-56, VGG-16, and MobileNet) with four datasets (CIFAR-10, SVHN, GTSRB, and Tiny-ImageNet), and our backdoor is stealthy enough to evade multiple state-of-the-art backdoor detection or removal methods.
Speaker Ziyuan Zhang (Beijing University of Posts and Telecommunications)

Ziyuan Zhang is a senior student from Beijing University of Posts and Telecommunications. Her current research focuses on edge computing security issues.


A Comprehensive and Long-term Evaluation of Tor V3 Onion Services

Chunmian Wang, Luo Junzhou and Zhen Ling (Southeast University, China); Lan Luo (University of Central Florida, USA); Xinwen Fu (University of Massachusetts Lowell, USA)

0
To enhance the privacy of Tor onion services, the new generation onion service protocol, i.e., version 3 (V3), is deployed to deeply hide the domain names of onion services. However, existing onion service analysis methods cannot be used any more to understand V3 onion services. We address the issue in this paper. To understand the scale of V3 onion services, we theoretically analyze the V3 onion service mechanism and propose an accurate onion service size estimation method, which is able to achieve an estimation deviation of 2.43% on a large-scale Tor emulation network. To understand onion website popularity, we build a system and collect more than two years of data of public onion websites. We develop an onion service popularity estimation algorithm using online rate and access rate to rank the onion services. To reduce the noise from the phishing websites, we cluster onion websites into groups based on the content and structure. To our surprise, we only find 487 core websites out of the collected 45,889 public onion websites. We further analyze the weighted popularity using yellow page data within each group and discover that 35,331 phishing onion websites spoof the 487 core websites.
Speaker Chunmian Wang (Southeast University, China)



DTrust: Toward Dynamic Trust Levels Assessment in Time-Varying Online Social Networks

Jie Wen (East China Jiaotong University, China & University of South China, China); Nan Jiang (East China Jiaotong University, China); Jin Li (Guangzhou University, China); Ximeng Liu (Fuzhou University, China); Honglong Chen (China University of Petroleum, China); Yanzhi Ren (University of Electronic Science and Technology of China, China); Zhaohui Yuan and Ziang Tu (East China Jiaotong University, China)

0
Social trust assessment can spur extensive applications but remains a challenging problem with limited exploration. Existing explorations mainly limit their studies to static network topologies or simplified dynamic networks for social trust relationship prediction. We explore social trust by taking into account time-varying online social networks (OSNs), in which the social trust relationships may vary over time. We propose DTrust, a dynamic graph attention-based solution, for accurate social trust prediction. DTrust is composed of a static aggregation unit and a dynamic unit, responsible for capturing the spatial dependence features and the temporal dependence features, respectively. In the former unit, we stack multiple NNConv layers derived from the edge-conditioned convolution network to capture the spatial dependence features correlated with the network topology and the observed social relationships. In the latter unit, a gated recurrent unit (GRU) is employed to learn the evolution of social interactions and social trust relationships. Based on the extracted spatial and temporal features, we then employ neural networks for learning, which can predict the social trust relationships for both current and future time slots. Extensive experimental results show that DTrust outperforms the benchmark counterparts on two real-world datasets.
Speaker Jie Wen (East China Jiaotong University)

Jie Wen was born in 1983. He received the M.Sc. degree (2008) in Control Science and Engineering from Central South University. He is currently pursuing the Ph.D. degree in control systems at East China Jiaotong University. His current research interests focus on the Industrial Internet of Things, Age of Information, Graph Neural Networks, and Social Networks.


SDN Application Backdoor: Disrupting the Service via Poisoning the Topology

Shuhua Deng, Xian Qing and Xiaofan Li (Xiangtan University, China); Xing Gao (University of Delaware, USA); Xieping Gao (Hunan Normal University, China)

0
Software-Defined Networking (SDN) enables the deployment of diversified networking applications by providing global visibility and open programmability on a centralized controller. As SDN enters its second decade, several well-developed open source controllers have been widely adopted in industry, and various commercial SDN applications are built to meet the surging demand for network innovation. This complex ecosystem inevitably introduces new security threats, as malicious applications can significantly disrupt network operations. In this paper, we introduce a new vulnerability in existing SDN controllers that enables adversaries to create a backdoor and further deploy malicious applications to disrupt network services via a series of topology poisoning attacks. The root cause of this vulnerability is that SDN systems simply process received Packet-In messages without checking their integrity, and thus can be misguided by manipulated messages. We discover that five popular SDN controllers (i.e., Floodlight, ONOS, OpenDaylight, POX and Ryu) are potentially vulnerable to the disclosed attack, and further propose six new attacks exploiting this vulnerability to disrupt SDN services from different layers. We evaluate the effectiveness of these attacks with experiments in real SDN testbeds, and discuss feasible countermeasures.
Speaker Xian Qing (Xiangtan University)

Xian Qing is a postgraduate student from Xiangtan University. Her current research focuses on software-defined networking.


Session Chair

Yang Xiao

Session D-10

Edge Computing 3

Conference
3:30 PM — 5:00 PM EDT
Local
May 19 Fri, 3:30 PM — 5:00 PM EDT
Location
Babbio 210

Prophet: An Efficient Feature Indexing Mechanism for Similarity Data Sharing at Network Edge

Yuchen Sun, Deke Guo, Lailong Luo, Li Liu, Xinyi Li and Junjie Xie (National University of Defense Technology, China)

0
As an emerging infrastructure, edge storage systems have attracted many efforts to efficiently distribute and share data among edge servers. However, meeting the increasing demand for similarity data sharing remains an open problem. The intrinsic reason is that existing solutions can only return an exact match for a query, while more general edge applications require data similar to a query input from any server. To fill this gap, this paper pioneers a new paradigm to support high-dimensional similarity search at network edges. Specifically, we propose Prophet, the first known architecture for similarity data indexing. We first divide the feature space of data into plenty of subareas, then project both subareas and edge servers into a virtual plane where the distances between any two points reflect not only data similarity but also network latency. When any edge server submits a request for data insert, delete, or query, it computes the data feature and the virtual coordinates, and then iteratively forwards the request through greedy routing based on the forwarding tables and the virtual coordinates. With Prophet, similar high-dimensional features are stored by a common server or several nearby servers.
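
The greedy forwarding step can be illustrated with a small sketch: each server knows its neighbors' virtual coordinates and forwards a request to the neighbor closest to the query's coordinates, stopping when no neighbor improves on the current server. The toy topology, coordinates, and Euclidean distance below are illustrative assumptions, not Prophet's actual plane construction.

```python
# Illustrative greedy routing over virtual coordinates (toy topology and coordinates).
import math

coords = {                       # virtual-plane coordinates of edge servers (assumed)
    "s1": (0.0, 0.0), "s2": (1.0, 0.5), "s3": (2.0, 0.2),
    "s4": (2.5, 1.5), "s5": (3.5, 1.0),
}
neighbors = {                    # forwarding tables (assumed)
    "s1": ["s2"], "s2": ["s1", "s3"], "s3": ["s2", "s4", "s5"],
    "s4": ["s3", "s5"], "s5": ["s3", "s4"],
}

def greedy_route(src, target_xy):
    """Forward to the neighbor closest to the target until no neighbor is closer."""
    path, cur = [src], src
    while True:
        best = min(neighbors[cur], key=lambda n: math.dist(coords[n], target_xy))
        if math.dist(coords[best], target_xy) >= math.dist(coords[cur], target_xy):
            return path          # local minimum: the current server handles the request
        cur = best
        path.append(cur)

print(greedy_route("s1", (3.2, 1.1)))   # a query feature projected to (3.2, 1.1)
```
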
Speaker Yuchen Sun (National University of Defense Technology)

Yuchen Sun received a B.S. in Telecommunication Engineering from the Huazhong University of Science and Technology, Wuhan, China, in 2018. He has been with the School of System Engineering, National University of Defense Technology, Changsha, where he is currently a PhD candidate. His research interests include Trustworthy Artificial Intelligence, Edge Computing and Wireless Indoor Localization.


DeepFT: Fault-Tolerant Edge Computing using a Self-Supervised Deep Surrogate Model

Shreshth Tuli and Giuliano Casale (Imperial College London, United Kingdom (Great Britain)); Ludmila Cherkasova (ARM Research, USA); Nicholas Jennings (Loughborough University, United Kingdom (Great Britain))

2
The emergence of latency-critical AI applications has been supported by the evolution of the edge computing paradigm. However, edge solutions are typically resource-constrained, posing reliability challenges due to heightened contention for compute capacities and faulty application behavior in the presence of overload conditions. Although a large amount of generated log data can be mined for fault prediction, labeling this data for training is a manual process and thus a limiting factor for automation. Due to this, many companies resort to unsupervised fault-tolerance models. Yet, failure models of this kind can incur a loss of accuracy when they need to adapt to non-stationary workloads and diverse host characteristics. Thus, we propose a novel modeling approach, DeepFT, to proactively avoid system overloads and their adverse effects by optimizing the task scheduling decisions. DeepFT uses a deep-surrogate model to accurately predict and diagnose faults in the system and co-simulation based self-supervised learning to dynamically adapt the model in volatile settings. Experimentation on an edge cluster shows that DeepFT can outperform state-of-the-art methods in fault-detection and QoS metrics. Specifically, DeepFT gives the highest F1 scores for fault-detection, reducing service deadline violations by up to 37% while also improving response time by up to 9%.
Speaker Shreshth Tuli

Shreshth Tuli is a President's Ph.D. Scholar at the Department of Computing, Imperial College London, UK. Prior to this he was an undergraduate student at the Department of Computer Science and Engineering at Indian Institute of Technology - Delhi, India. He has worked as a visiting research fellow at the CLOUDS Laboratory, School of Computing and Information Systems, the University of Melbourne, Australia. He is a national level Kishore Vaigyanik Protsahan Yojana (KVPY) scholarship holder from the Government of India for excellence in science and innovation. His research interests include Internet of Things (IoT), Fog Computing and Deep Learning.


Marginal Value-Based Edge Resource Pricing and Allocation for Deadline-Sensitive Tasks

Puwei Wang and Zhouxing Sun (Renmin University of China, China); Ying Zhan (Guizhou University of Finance and Economics, China); Haoran Li and Xiaoyong Du (Renmin University of China, China)

0
In edge computing (EC), resource allocation assigns the computing, storage and networking resources on edge nodes (ENs) efficiently and reasonably to tasks generated by users. Due to the resource limitation of ENs, tasks often need to compete for resources. Pricing mechanisms are widely used to deal with the resource allocation problem, and the valuations of tasks play a critical role in these mechanisms. However, users are naturally unwilling to expose the valuations of their tasks. In this paper, we introduce the marginal value to estimate the valuations of tasks, and propose a marginal value-based pricing mechanism using incentive theory, which incentivizes tasks with higher marginal values to actively request more resources. The EC platform first sets the resource prices, and then the users determine their resource requests based on the resource prices and the valuations of their tasks. After receiving the deadline-sensitive tasks from the users, the resource allocation can be modeled as a 0-1 knapsack problem with the deadline constraints of tasks. Extensive experimental results demonstrate that our approach is computationally efficient and is promising in enhancing the utility of the EC platform and the tasks.
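
Since the allocation step reduces to a 0-1 knapsack problem, a standard dynamic-programming sketch is shown below. The task values, demands, and capacity are illustrative, and the deadline constraints described in the abstract are omitted here.

```python
# Classic 0-1 knapsack DP: pick tasks maximizing total value under a capacity budget.
# Task values/demands and the capacity are illustrative; deadlines are not modeled.
def knapsack(values, demands, capacity):
    n = len(values)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]                          # skip task i-1
            if demands[i - 1] <= c:
                take = dp[i - 1][c - demands[i - 1]] + values[i - 1]
                dp[i][c] = max(dp[i][c], take)               # or accept it
    # Backtrack to recover the accepted task set.
    chosen, c = [], capacity
    for i in range(n, 0, -1):
        if dp[i][c] != dp[i - 1][c]:
            chosen.append(i - 1)
            c -= demands[i - 1]
    return dp[n][capacity], sorted(chosen)

values = [30, 14, 16, 9]     # estimated marginal values of tasks (assumed)
demands = [6, 3, 4, 2]       # resource units requested (assumed)
print(knapsack(values, demands, capacity=10))
```
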
Speaker Puwei Wang

Puwei Wang is an associate professor in the School of Information, Renmin University of China. His current research is on blockchain, service computing and edge computing.


Digital Twin-Enabled Service Satisfaction Enhancement in Edge Computing

Jing Li (The Hong Kong Polytechnic University, Hong Kong); Jianping Wang (City University of Hong Kong, Hong Kong); Quan Chen (Guangdong University of Technology, China); Yuchen Li (The Australian National University, Australia); Albert Zomaya (The University of Sydney, Australia)

0
The emerging digital twin technique enhances the network management efficiency and provides comprehensive insights, through mapping physical objects to their digital twins. The user satisfaction on digital twin-enabled query services relies on the freshness of digital twin data, which is measured by the Age of Information (AoI). Mobile Edge Computing (MEC), as a promising technology, offers real-time data communication between physical objects and their digital twins at the edge of the core network. However, the mobility of physical objects and dynamic query arrivals in MEC networks make efficient service provisioning in MEC become challenging. In this paper, we investigate the dynamic digital twin placement for augmenting user service satisfaction in MEC. We focus on two user service satisfaction augmentation problems in an MEC network, i.e., the static and dynamic utility maximization problems. We first formulate an Integer Linear Programming (ILP) solution and a performance-guaranteed approximation algorithm for the static utility maximization problem. We then devise an online algorithm for the dynamic utility maximization problem with a provable competitive ratio. Finally, we evaluate the performance of the proposed algorithms through experimental simulations. Simulation results demonstrate that the proposed algorithms are promising.
Speaker Jing Li (The Hong Kong Polytechnic University)

Jing Li received the PhD degree and the BSc degree with the first class Honours from The Australian National University. He is currently a postdoctoral fellow at The Hong Kong Polytechnic University. His research interests include edge computing, internet of things, digital twin, network function virtualization, and combinatorial optimization.


Session Chair

Hao Wang

Session E-10

Video Streaming 4

Conference
3:30 PM — 5:00 PM EDT
Local
May 19 Fri, 3:30 PM — 5:00 PM EDT
Location
Babbio 219

Collaborative Streaming and Super Resolution Adaptation for Mobile Immersive Videos

Lei Zhang (Shenzhen University, China); Haotian Guo (ShenZhen University, China); Yanjie Dong (Shenzhen University, China); Fangxin Wang (The Chinese University of Hong Kong, Shenzhen, China); Laizhong Cui (Shenzhen University, China); Victor C.M. Leung (Shenzhen University, China & The University of British Columbia, Canada)

0
Tile-based streaming and super resolution are two representative technologies adopted to improve bandwidth efficiency for immersive video streaming. The former allows selective download of the contents in the user viewport by splitting the video into multiple independently decodable tiles. The latter leverages client-side computation to reconstruct the received video into higher quality using advanced neural network models. In this work, we propose CASE, a collaborative adaptive streaming and enhancement framework for mobile immersive videos, which integrates super resolution with tile-based streaming to optimize user experience under dynamic bandwidth and limited computing capability. To coordinate the video transmission and reconstruction in CASE, we identify and address several key design issues including unified video quality assessment, a computation complexity model for super resolution, and buffer analysis considering the interplay between transmission and reconstruction. We further formulate the quality-of-experience (QoE) maximization problem for mobile immersive video streaming and propose a rate adaptation algorithm that makes the best decisions for download and reconstruction based on Lyapunov optimization theory. Extensive evaluation results validate the superiority of our proposed approach, which presents stable performance with considerable QoE improvement and is able to adjust the trade-off between playback smoothness and video quality.
Speaker Haotian Guo



EAVS: Edge-assisted Adaptive Video Streaming with Fine-grained Serverless Pipelines

Biao Hou and Song Yang (Beijing Institute of Technology, China); Fernando A. Kuipers (Delft University of Technology, The Netherlands); Lei Jiao (University of Oregon, USA); Xiaoming Fu (University of Goettingen, Germany)

0
Recent years have witnessed video streaming gradually evolve into one of the most popular Internet applications. With the rapidly growing personalized demand for real-time video streaming services, maximizing the Quality of Experience (QoE) for video streaming is a long-standing challenge. The emergence of the serverless computing paradigm has the potential to meet this challenge through its fine-grained management and highly parallel computing structures. However, it remains unclear how to implement and configure serverless components to optimize video streaming services. In this paper, we propose EAVS, an Edge-assisted Adaptive Video streaming system with Serverless pipelines, which facilitates fine-grained management for multiple concurrent video transmission pipelines. Then, we design a chunk-level optimization scheme to solve the video bitrate adaptation issue. We propose a Deep Reinforcement Learning (DRL) algorithm based on Proximal Policy Optimization (PPO) with a trinal-clip mechanism to make bitrate decisions efficiently for better user-perceived QoE. Finally, we implement the serverless video streaming system prototype and evaluate the performance of EAVS on various real-world network traces. Extensive results show that our proposed EAVS significantly improves user-perceived QoE and reduces the stall rate, achieving over 9.1% QoE improvement and 60.2% latency decrease compared to state-of-the-art solutions.
Speaker Biao Hou (Beijing Institute of Technology)

Biao Hou received the B.S. degree in computer science and the M.S. degree in computer science from the Inner Mongolia University, China, in 2018 and 2021, respectively. He is currently the Ph.D. student with the School of Computer Science and Technology, Beijing Institute of Technology. His research interests include edge computing and video streaming delivery.


SJA: Server-driven Joint Adaptation of Loss and Bitrate for Multi-Party Realtime Video Streaming

Kai Shen, Dayou Zhang and Zi Zhu (The Chinese University of Hong Kong Shenzhen, China); Lei Zhang (Shenzhen University, China); Fangxin Wang (The Chinese University of Hong Kong, Shenzhen, China); Dan Wang (The Hong Kong Polytechnic University, Hong Kong)

0
The outbreak of COVID-19 has dramatically promoted the explosive proliferation of multi-party realtime video streaming (MRVS) services, represented by Zoom and Microsoft Teams. Different from Video-on-Demand (VoD) or live streaming, MRVS enables all-to-all realtime video communication, bringing great challenges to service provisioning. First, unreliable network transmission can cause network loss, resulting in increased delay and degraded visual quality. Second, the transformation from two-party to multi-party communication makes resource scheduling much more difficult. Moreover, optimizing the overall QoE requires global coordination, which is quite challenging given the various impact factors such as loss, bitrate, and network conditions.

In this paper, we propose the SJA framework, which is, to the best of our knowledge, the first server-driven joint loss and bitrate adaptation framework for multi-party realtime video streaming services towards maximized QoE. We comprehensively design an appropriate QoE model for MRVS services to capture the interplay among perceptual quality, variations, bitrate mismatch, loss damage, and streaming delay. We mathematically formulate the QoE maximization problem in MRVS services. A Lyapunov-based algorithm and the SJA algorithm are further designed to address the optimization problem with close-to-optimal performance. Evaluations show that our framework can outperform the SOTA solutions by 18.4% ~ 46.5%.
Speaker Dayou Zhang



ResMap: Exploiting Sparse Residual Feature Map for Accelerating Cross-Edge Video Analytics

Ning Chen, Shuai Zhang, Sheng Zhang, Yuting Yan, Yu Chen and Sanglu Lu (Nanjing University, China)

1
Deploying deep convolutional neural networks (CNNs) to perform video analytics at the edge poses a substantial system challenge, as running CNN inference incurs a prohibitive cost in computational resources. Model partitioning, a promising approach, splits CNNs and distributes them to multiple edge devices in close proximity to each other for serial inference; however, it causes considerable cross-edge delay for transmitting intermediate feature maps. To overcome this challenge, we present ResMap, a new edge video analytics framework that significantly improves cross-edge transmission and flexibly partitions the CNNs. Briefly, by exploiting the sparsity of the intermediate raw or residual feature map, ResMap effectively removes redundant transmission, thereby decreasing the cross-edge transmission delay. In addition, ResMap incorporates an Online Data-Aware Scheduler to regularly update the CNN partitioning scheme so as to adapt to the time-varying edge runtime and video content. We have implemented ResMap fully on COTS hardware, and the experimental results show that ResMap reduces the intermediate feature map volume by 14.93-46.12% and improves the average processing time by 17.43-30.6% compared to other alternative designs.
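
The key transmission saving, sending only the sparse residual between consecutive frames' intermediate feature maps, can be sketched as below. The tensor shapes, the sparsification threshold, and the (indices, values) encoding are illustrative assumptions, not ResMap's exact codec.

```python
# Illustrative sketch: transmit only the sparse residual of an intermediate feature map.
# The threshold and (indices, values) encoding are assumptions, not ResMap's codec.
import numpy as np

def encode_residual(prev_fmap, curr_fmap, threshold=0.05):
    residual = curr_fmap - prev_fmap
    mask = np.abs(residual) > threshold          # drop near-zero residual entries
    idx = np.flatnonzero(mask)
    return idx.astype(np.int32), residual.ravel()[idx].astype(np.float16)

def decode_residual(prev_fmap, idx, vals):
    recon = prev_fmap.copy().ravel()
    recon[idx] += vals.astype(prev_fmap.dtype)
    return recon.reshape(prev_fmap.shape)

rng = np.random.default_rng(0)
prev = rng.standard_normal((64, 28, 28)).astype(np.float32)                 # feature map, frame t-1
curr = prev + 0.02 * rng.standard_normal(prev.shape).astype(np.float32)     # similar frame t
idx, vals = encode_residual(prev, curr)
approx = decode_residual(prev, idx, vals)
print(f"kept {idx.size}/{prev.size} entries, max error {np.max(np.abs(approx - curr)):.4f}")
```
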
Speaker Ning Chen (Nanjing University)

I am a Ph.D. student in the Department of Computer Science and Technology at Nanjing University, advised by Associate Professor Sheng Zhang. My research interests are broadly in edge intelligence. More specifically, I focus on two different directions.

AI for Edge: Using ML algorithms (e.g., reinforcement learning) to solve edge-oriented problems, e.g., resource allocation and request scheduling (TPDS 2020, CN 2021, ICPADS 2019);

Edge for AI: Applying the edge computing paradigm to advance AI applications (e.g., video analytics, video streaming enhancement and federated learning) (INFOCOM 23, CN 21).

In the past two years, I have worked on AI/ML-oriented video system optimization.


Session Chair

Jun ZHAO

Session F-10

Federated Learning 6

Conference
3:30 PM — 5:00 PM EDT
Local
May 19 Fri, 3:30 PM — 5:00 PM EDT
Location
Babbio 220

A Reinforcement Learning Approach for Minimizing Job Completion Time in Clustered Federated Learning

Ruiting Zhou (Southeast University, China); Jieling Yu and Ruobei Wang (Wuhan University, China); Bo Li (Hong Kong University of Science and Technology, Hong Kong); Jiacheng Jiang and Libing Wu (Wuhan University, China)

0
Federated Learning (FL) enables a potentially large number of clients to collaboratively train a global model with the coordination of a central cloud server without exposing client raw data. However, the FL model convergence performance, often measured by the job completion time, is hindered by two critical factors: non-independent and identically distributed (non-IID) data across clients and the straggler effect. In this work, we propose a clustered FL framework, MCFL, to minimize the job completion time by mitigating the influence of non-IID data and the straggler effect while guaranteeing the FL model convergence performance. MCFL builds upon a two-stage operation: i) a clustering algorithm constructs clusters, each containing clients with similar computing and communication capabilities, to combat the straggler effect within a cluster; ii) a deep reinforcement learning (DRL) algorithm based on soft actor-critic with discrete actions intelligently selects a subset of clients from each cluster to mitigate the impact of non-IID data, and derives the number of intra-cluster aggregation iterations for each cluster to reduce the straggler effect among clusters. Experiments are conducted under various configurations to verify the efficacy of MCFL. The results show that MCFL can reduce the job completion time by up to 70%.
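
The first stage, grouping clients with similar computing and communication capabilities, can be sketched with a simple k-means clustering over capability features. The feature choice, scaling, and number of clusters below are illustrative assumptions, not MCFL's clustering algorithm.

```python
# Illustrative sketch: cluster FL clients by compute/communication capability.
# Features, scaling, and the number of clusters are assumptions, not MCFL's algorithm.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_clients = 30
# Capability features per client: (compute in GFLOPs, uplink bandwidth in Mbps) -- assumed.
capabilities = np.column_stack([
    rng.uniform(5, 100, n_clients),
    rng.uniform(1, 50, n_clients),
])

X = StandardScaler().fit_transform(capabilities)        # put features on a common scale
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

for c in range(4):
    members = np.flatnonzero(kmeans.labels_ == c)
    print(f"cluster {c}: clients {members.tolist()}")
```
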
Speaker Jieling Yu (Wuhan University)

Jieling Yu received the BE degree from the School of Cyber Science and Engineering, Wuhan University, China, in 2021. She is currently working toward the master’s degree from the School of Cyber Science and Engineering, Wuhan University, China. Her research interests include edge computing, federated learning, online learning and network optimization.



AnycostFL: Efficient On-Demand Federated Learning over Heterogeneous Edge Devices

Peichun Li (Guangdong University of Technology, China & University of Macau, Macao); Guoliang Cheng and Xumin Huang (Guangdong University of Technology, China); Jiawen Kang (Nanyang Technological University, Singapore); Rong Yu (Guangdong University of Technology, China); Yuan Wu (University of Macau, Macao); Miao Pan (University of Houston, USA)

1
In this work, we investigate the challenging problem of on-demand federated learning (FL) over heterogeneous edge devices with diverse resource constraints. We propose a cost-adjustable FL framework, named AnycostFL, that enables diverse edge devices to efficiently perform local updates under a wide range of efficiency constraints. To this end, we design the model shrinking to support local model training with elastic computation cost, and the gradient compression to allow parameter transmission with dynamic communication overhead. An enhanced parameter aggregation is conducted in an element-wise manner to improve the model performance. Focusing on AnycostFL, we further propose an optimization design to minimize the global training loss with personalized latency and energy constraints.
By revealing the theoretical insights of the convergence analysis, personalized training strategies are deduced for different devices to match their locally available resources. Experiment results indicate that, compared to the state-of-the-art efficient FL algorithms, our learning framework can reduce the training latency and energy consumption by up to 1.9× while realizing a reasonable global testing accuracy. Moreover, the results also demonstrate that our approach significantly improves the converged global accuracy.
Speaker Peichun Li (University of Macau)

Peichun Li received his M.S. degree from the Guangdong University of Technology. He is currently a research assistant at the University of Macau. His research interests include edge computing and deep learning, particularly in efficient algorithms for artificial intelligence applications.


AOCC-FL: Federated Learning with Aligned Overlapping via Calibrated Compensation

Haozhao Wang (Huazhong University of Science and Technology, China); Wenchao Xu and Yunfeng Fan (The Hong Kong Polytechnic University, China); Ruixuan Li (Huazhong University of Science and Technology, China); Pan Zhou (School of CSE, Huazhong University of Science and Technology, China)

1
Federated Learning enables collaborative model training among a number of distributed devices with the coordination of a centralized server, where each device alternates between local gradient computation and communication to the server. FL suffers from significant performance degradation due to the excessive communication delay between the server and devices, especially when the network bandwidth of these devices is limited, which is common in edge environments. Existing methods overlap the gradient computation and communication to hide the communication latency and accelerate FL training. However, the overlapping can also lead to an inevitable gap between the local model on each device and the global model on the server, which seriously restricts the convergence rate of the learning process. To address this problem, we propose a new overlapping method for FL, AOCC-FL, which aligns the local model with the global model via calibrated compensation such that the communication delay can be hidden without deteriorating the convergence performance. Theoretically, we prove that AOCC-FL admits the same convergence rate as the non-overlapping method. On both simulated and testbed experiments, we show that AOCC-FL achieves a comparable convergence rate relative to the non-overlapping method while outperforming the state-of-the-art overlapping methods.
Speaker Haozhao Wang

Haozhao Wang is currently doing postdoctoral research in the School of Computer Science and Technology at Huazhong University of Science and Technology. He obtained his Ph.D. from the same university in 2021 and obtained his bachelor's degree from the University of Electronic Science and Technology. He was a research assistant in the Department of Computing at The Hong Kong Polytechnic University. His research interests include Edge Learning and Federated Learning.


Joint Edge Aggregation and Association for Cost-Efficient Multi-Cell Federated Learning

Tao Wu (National University of Defense Technology, China); Yuben Qu (Nanjing University of Aeronautics and Astronautics, China); Chunsheng Liu (National University of Defense Technology, China); Yuqian Jing (Nanjing University Of Aeronautics And Astronautics, China); Feiyu Wu (Nanjing University of Aeronautics and Astronautics, China); Haipeng Dai (Nanjing University & State Key Laboratory for Novel Software Technology, China); Chao Dong (Nanjing University of Aeronautics and Astronautics, China); Jiannong Cao (Hong Kong Polytechnic Univ, Hong Kong)

0
Federated learning (FL) has been proposed as a promising distributed learning paradigm to realize edge artificial intelligence (AI) without revealing the raw data. Nevertheless, it would incur inevitable costs in terms of training latency and energy consumption, due to periodical communication between user equipments (UEs) and the remote central parameter server. Thus motivated, we study the joint edge aggregation and association problem to minimize the total cost, where the model aggregation over multiple cells just happens at the network edge. After proving its hardness with coupled variables, we transform it into a set function optimization problem that the objective function is neither submodular nor supermodular, which further complicates the problem. To tackle this difficulty, we first split it into multiple edge association subproblems, where the optimal computation resource allocation can be obtained in the closed form. We then construct a substitute function with the supermodularity and provable upper bound. On this basis, we reformulate an equivalent set function minimization problem under a matroid base constraint. We propose an approximation algorithm to the original problem based on the two-stage search strategy with theoretical performance guarantee. Both extensive simulations and field experiments are conducted to validate the effectiveness of our proposed solution.
Speaker Tao Wu (National University of Defense Technology, China)



Session Chair

Jun Li

Session G-10

Miscellaneous

Conference
3:30 PM — 5:00 PM EDT
Local
May 19 Fri, 3:30 PM — 5:00 PM EDT
Location
Babbio 221

CLP: A Community based Label Propagation Framework for Multiple Source Detection

Chong Zhang and Luoyi Fu (Shanghai Jiao Tong University, China); Fei Long (Chinaso, China); Xinbing Wang (Shanghai Jiaotong University, China); Chenghu Zhou (Shanghai Jiao Tong University, China)

1
Given the aftermath of an information spreading process, i.e., an infected network after the propagation of malicious rumors, malware or viruses, how can we identify the sources of the cascade? Answering this question, known as the multiple source detection (MSD) problem, is critical both for forensic use and for insights to prevent future epidemics.
Despite considerable recent effort, most existing methods are built on a preset propagation model, which limits their application range. Some attempts aim to break this limitation via a label propagation scheme in which the nodes surrounded by large proportions of infected nodes are highlighted. Nonetheless, the detection accuracy may suffer since the node labels are simply integers, with all infected (resp. uninfected) nodes sharing the same initialization, which falls short of sufficiently distinguishing their structural properties. To this end, we propose a community based label propagation (CLP) framework that locates multiple sources by exploiting the community structures formed by the infected subgraphs of different sources. Besides, CLP enhances the detection accuracy by incorporating node prominence and exoneration effects. As such, CLP is applicable to more propagation models. Experiments on both synthetic and real-world networks further validate the superiority of CLP over the state-of-the-art.
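
As a point of reference for the label-propagation scheme that CLP builds on, a minimal baseline sketch is given below: infected nodes are seeded with a score, scores diffuse over the graph for a few rounds, and the highest-scoring infected nodes are reported as source candidates. The toy graph, the uniform seeding (exactly the limitation discussed above), and the number of rounds are illustrative assumptions, not CLP's community-based procedure.

```python
# Minimal label-propagation baseline for multi-source detection (toy graph, assumed setup).
adjacency = {                       # undirected infected subgraph (assumed)
    0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4],
    3: [1, 4, 5], 4: [2, 3, 5], 5: [3, 4],
}
infected = {0, 1, 2, 3, 4, 5}       # all observed infected nodes

# Seed: every infected node starts with the same integer-like score.
scores = {v: 1.0 if v in infected else 0.0 for v in adjacency}

for _ in range(10):                 # propagate labels: average over neighbors
    scores = {
        v: sum(scores[u] for u in adjacency[v]) / len(adjacency[v])
        for v in adjacency
    }

k = 2                               # number of sources to report (assumed known)
candidates = sorted(infected, key=lambda v: scores[v], reverse=True)[:k]
print("candidate sources:", candidates)
```
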
Speaker Chong Zhang

Chong Zhang received his B.E. degree in Telecommunications Engineering from Xidian University, China, in 2018. He is currently pursuing the Ph.D. degree in the Department of Electronic Engineering at Shanghai Jiao Tong University, Shanghai, China. His research interests are in the area of social networks and data mining.



GinApp: An Inductive Graph Learning based Framework for Mobile Application Usage Prediction

Zhihao Shen, Xi Zhao and Jianhua Zou (Xi'an Jiaotong University, China)

0
Mobile application usage prediction aims to infer the possible applications (Apps) that a user will launch next. It is critical for many applications, e.g., system optimization and smartphone resource management. Recently, graph based App prediction approaches have proven effective, but they still suffer from several issues. First, these studies cannot naturally generalize to unseen Apps. Second, they do not model asymmetric transitions between Apps. Third, they struggle to differentiate the contributions of different App usage contexts to the prediction result. In this paper, we propose GinApp, an inductive graph representation learning based framework, to resolve these issues. Specifically, we first construct an attribute-aware heterogeneous directed graph based on App usage records, where the App-App transitions and times are well modeled by directed weighted edges. Then, we develop an inductive graph learning based method to generate effective node representations for the unseen Apps by sampling and aggregating the information from neighboring nodes. Finally, our App usage prediction problem is reformulated as a link prediction problem on the graph to generate the Apps with the largest probabilities as prediction results. Extensive experiments on two large-scale App usage datasets reveal that GinApp provides state-of-the-art performance for App usage prediction.
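
The inductive "sample and aggregate" step can be sketched as follows: an unseen App node gets an embedding by aggregating sampled neighbors' attributes and passing the result through a learned transform. The dimensions, sampling size, mean aggregator, and random weights below are illustrative assumptions, not GinApp's exact architecture.

```python
# Illustrative GraphSAGE-style inductive embedding for a (possibly unseen) App node.
# Feature dimensions, sample size, and the mean aggregator are assumptions.
import numpy as np

rng = np.random.default_rng(0)
D = 16                                        # node attribute dimension (assumed)
W = rng.standard_normal((D, 2 * D)) * 0.1     # learned weight (random here for illustration)

def embed(node_feat, neighbor_feats, sample_size=5):
    """h_v = ReLU(W @ [x_v ; mean(x_u for sampled neighbors u)])."""
    if len(neighbor_feats) > sample_size:
        idx = rng.choice(len(neighbor_feats), size=sample_size, replace=False)
        neighbor_feats = [neighbor_feats[i] for i in idx]
    agg = np.mean(neighbor_feats, axis=0) if neighbor_feats else np.zeros(D)
    h = W @ np.concatenate([node_feat, agg])
    return np.maximum(h, 0.0)                 # ReLU

# An unseen App: only its attributes and its neighbors' attributes are needed.
new_app = rng.standard_normal(D)
neighbors = [rng.standard_normal(D) for _ in range(8)]
print("embedding:", np.round(embed(new_app, neighbors), 3)[:5], "...")
```
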
Speaker Zhihao Shen (Xi'an Jiaotong University)

Zhihao Shen received his B.E. degree in automation engineering from School of Electronic and Information, Xi'an Jiaotong University, Xi'an, China, in 2016, where he is currently pursuing the Ph.D. degree with the Systems Engineering Institute. His research interests include mobile computing, big data analytics, and deep learning.


Cost-Effective Live Expansion of Three-Stage Switching Networks without Blocking or Connection Rearrangement

Takeru Inoue and Toru Mano (NTT Network Innovation Labs., Japan); Takeaki Uno (National Institute of Informatics, Japan)

0
The rapid growth of datacenter traffic requires network expansion without interrupting communications within and between datacenters. Past studies on network expansion while carrying live traffic have focused on packet-switched networks such as Ethernet. Optical networks have recently been attracting attention for datacenters due to their great transmission capacity and power efficiency, but they are modeled as circuit-switched networks. To our knowledge, no practical live expansion method is known for circuit-switched networks; the Clos network, a nonblocking three-stage switching structure, can be expanded, but it involves connection rearrangement (interruption) or significant initial investment. This paper proposes a cost-effective network expansion method that does not rely on connection rearrangement. Our method expands a network by rewiring inactive links to include additional switches, which also avoids the blocking of new connection requests. Our method is designed to only require switches whose number is proportional to the network size, which suppresses the initial investment. Numerical experiments show that our method incrementally expands a circuit-switched network from 1,000 to 30,000 ports, sufficient to accommodate all racks in today's huge datacenters. The initial structure of our method only consists of three 1024x1024 switches, while that of a reference method requires 34 switches.
Speaker Takeru Inoue (NTT Labs, Japan)

Takeru Inoue is a Distinguished Researcher at Nippon Telegraph and Telephone Corporation (NTT) Laboratories, Japan. He received the B.E. and M.E. degrees in engineering science and the Ph.D. degree in information science from Kyoto University, Japan, in 1998, 2000, and 2006, respectively. In 2000, he joined NTT Laboratories. From 2011 to 2013, he was an ERATO Researcher with the Japan Science and Technology Agency, where his research focused on algorithms and data structures. Currently, his research interests widely cover the reliable design of communication networks. Inoue was the recipient of several prestigious awards, including the Best Paper Award of the Asia-Pacific Conference on Communications in 2005, the Best Paper Award of the IEEE International Conference on Communications in 2016, the Best Paper Award of IEEE Global Communications Conference in 2017, the Best Paper Award of IEEE Reliability Society Japan Joint Chapter in 2020, the IEEE Asia/Pacific Board Outstanding Paper Award in 2020, and the IEICE Paper of the Year in 2021. He serves as an Associate Editor of the IEEE Transactions on Network and Service Management.


ASR: Efficient and Adaptive Stochastic Resonance for Weak Signal Detection

Xingyu Chen, Jia Liu, Xu Zhang and Lijun Chen (Nanjing University, China)

0
Stochastic resonance (SR) provides a new way for weak-signal detection by boosting undetectable signals with added white noise. However, existing work has to spend a long time searching for optimal SR parameter settings, which does not fit some practical applications well. In this paper, we propose an adaptive SR scheme (ASR) that can amplify the original signal at a low cost in time. The basic idea is that the potential parameter is a key factor determining the performance of SR. By treating the system as a feedback loop, we can dynamically adjust the potential parameters according to the output signals and make SR happen adaptively. ASR answers two technical questions: how to evaluate the output signal and how to tune the potential parameters quickly towards the optimum. In ASR, we first design a spectral-analysis based solution to examine whether SR happens using the continuous wavelet transform. After that, we reduce the parameter tuning problem to a constrained non-linear optimization problem and use sequential quadratic programming to iteratively optimize the potential parameters. We implement ASR and apply it to assist respiration-rate detection and machinery fault diagnosis. Extensive experiments show that ASR outperforms the state-of-the-art.
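
For intuition, the classical bistable stochastic-resonance system that such schemes tune can be simulated with a short Euler-Maruyama sketch. The potential parameters a and b, the weak periodic input, and the noise level below are arbitrary illustrative values, not ASR's adaptively tuned settings; the spectral check at the end is only a crude indicator, not ASR's wavelet-based evaluation.

```python
# Euler-Maruyama simulation of the classic bistable SR system:
#   dx/dt = a*x - b*x^3 + s(t) + noise, with a weak periodic input s(t).
# Parameter values are arbitrary illustrations, not ASR's adaptive settings.
import numpy as np

a, b = 1.0, 1.0                 # potential parameters (the quantities ASR would tune)
A, f = 0.3, 0.01                # weak periodic input amplitude and frequency (Hz)
sigma = 0.6                     # white-noise intensity
dt, steps = 0.01, 200_000

rng = np.random.default_rng(0)
x = np.empty(steps)
x[0] = 0.0
for k in range(steps - 1):
    t = k * dt
    drift = a * x[k] - b * x[k] ** 3 + A * np.sin(2 * np.pi * f * t)
    x[k + 1] = x[k] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# Inspect the output spectrum around the input frequency (a crude SR indicator).
spectrum = np.abs(np.fft.rfft(x - x.mean())) ** 2
freqs = np.fft.rfftfreq(steps, d=dt)
peak = freqs[np.argmax(spectrum[1:]) + 1]
print(f"dominant output frequency ≈ {peak:.4f} Hz (input frequency = {f} Hz)")
```
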
Speaker Xingyu Chen (Nanjing University)

Xingyu Chen is currently a Ph.D. student with the Department of Computer Science and Technology at Nanjing University of China. His research interests focus on indoor localization and RFID system. He is a student member of the IEEE.


Session Chair

Zhangyu Guan


