IEEE INFOCOM 2024

Session A-8: Mobile Networks and Applications

Conference: 8:30 AM — 10:00 AM PDT (Local: May 23 Thu, 11:30 AM — 1:00 PM EDT)
Location: Regency A

AIChronoLens: Advancing Explainability for Time Series AI Forecasting in Mobile Networks

Claudio Fiandrino, Eloy Pérez Gómez, Pablo Fernández Pérez, Hossein Mohammadalizadeh, Marco Fiore and Joerg Widmer (IMDEA Networks Institute, Spain)

Next-generation mobile networks will increasingly rely on the ability to forecast traffic patterns for resource management. Usually, this translates into forecasting diverse objectives like traffic load, bandwidth, or channel spectrum utilization, measured over time. Among other techniques, Long Short-Term Memory (LSTM) models have proved very successful at this task. Unfortunately, the inherent complexity of these models makes them hard to interpret and thus hampers their deployment in production networks. To make matters worse, EXplainable Artificial Intelligence (XAI) techniques, which are primarily conceived for computer vision and natural language processing, fail to provide useful insights: they are blind to the temporal characteristics of the input and only work well with semantically rich data like images or text. In this paper, we take the research on XAI for time series forecasting one step further by proposing AIChronoLens, a new tool that links legacy XAI explanations with the temporal properties of the input. In this way, AIChronoLens makes it possible to dive deep into the model behavior and spot, among other aspects, the hidden causes of errors. Extensive evaluations with real-world mobile traffic traces pinpoint model behaviors that could not be spotted otherwise and show that model performance can increase by 32%.
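The core idea of linking a legacy XAI attribution to the temporal structure of the input can be sketched as a simple post-hoc check. The code below is an illustrative stand-in, not AIChronoLens itself: `peak_alignment` is a hypothetical diagnostic, and the per-timestep attribution vector is assumed to come from any legacy XAI method (e.g., saliency or SHAP scores).

```python
import numpy as np

def local_peaks(series):
    """Indices that are strict local maxima of the series."""
    s = np.asarray(series, dtype=float)
    return np.flatnonzero((s[1:-1] > s[:-2]) & (s[1:-1] > s[2:])) + 1

def peak_alignment(series, attribution, k=3):
    """Fraction of the k highest-attribution timesteps that fall on
    local peaks of the input series (hypothetical diagnostic)."""
    top = set(np.argsort(attribution)[-k:].tolist())
    return len(top & set(local_peaks(series).tolist())) / k

# Traffic series with peaks at t = 1, 4, 7; an explanation that puts
# its mass on exactly those timesteps aligns perfectly.
traffic = [1, 3, 1, 2, 5, 2, 1, 4, 1]
attr = [0.0, 0.9, 0.0, 0.0, 0.8, 0.0, 0.0, 0.1, 0.0]
print(peak_alignment(traffic, attr))  # 1.0
```

A low alignment score on correctly predicted samples would be the kind of hidden model behavior such a tool surfaces.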

Characterizing 5G Adoption and its Impact on Network Traffic and Mobile Service Consumption

Sachit Mishra and André Felipe Zanella (IMDEA Networks Institute, Spain); Orlando E. Martínez-Durive (IMDEA Networks Institute & Universidad Carlos III de Madrid, Spain); Diego Madariaga (IMDEA Networks Institute, Spain); Cezary Ziemlicki (Orange labs, France); Marco Fiore (IMDEA Networks Institute, Spain)

The rollout of 5G, coupled with the traffic monitoring capabilities of modern industry-grade networks, offers an unprecedented opportunity to closely observe the impact that the introduction of a new major wireless technology has on the end users. In this paper, we seize such a unique chance, and carry out a first-of-its-kind in-depth analysis of 5G adoption along spatial, temporal and service dimensions. Leveraging massive measurement data about application-level demands collected in a nationwide 4G/5G network, we characterize the impact of the new technology on when, where and how mobile subscribers consume 5G traffic both in aggregate and for individual types of services. This lets us unveil the overall incidence of 5G in the total mobile network traffic, its spatial and temporal fluctuations, its effect on the way 5G services are consumed, the way individual services and geographical locations contribute to fluctuations in the 5G demand, as well as surprising connections between the socioeconomic status of local populations and the way the 5G technology is presently consumed.

Exploiting Multiple Similarity Spaces for Efficient and Flexible Incremental Update of Mobile Applications

Lewei Jin (Zhejiang University, China); Wei Dong, Jiang BoWen, Tong Sun and Yi Gao (Zhejiang University, China)

Mobile application updates occur frequently, and they continue to add considerable traffic over the Internet. Differencing algorithms, which compute a small delta between the new version and the old version, are often employed to reduce the update overhead. Transforming the old and new files into decoded similarity spaces can drastically reduce the delta size. However, this transformation is often hindered by two practical issues: (1) insufficient decoding and (2) long recompression time. To address this challenge, we propose two general approaches for transforming compressed files into the fully decoded and partially decoded similarity spaces with low recompression time. The first approach uses a recompression-aware searching mechanism, based on a general full-decoding tool, to transform a deflate stream into the fully decoded similarity space with a configurable searching complexity. The second approach uses a novel solution to transform a deflate stream into the partially decoded similarity space with differencing-friendly LZ77 token reencoding. We also propose an algorithm called MDiffPatch to exploit the fully and partially decoded similarity spaces. Extensive evaluation results show that MDiffPatch achieves a lower compression ratio than state-of-the-art algorithms and that its tunable parameter allows a good tradeoff between compression ratio and recompression time.
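For intuition, the basic differencing idea (ship a small copy/insert delta instead of the whole new file) can be sketched with Python's standard `difflib`. This toy delta works on raw bytes and does not implement the decoded-similarity-space transforms or MDiffPatch itself.

```python
import difflib

def make_delta(old: bytes, new: bytes):
    """Compute a compact delta as copy/insert operations (toy format)."""
    sm = difflib.SequenceMatcher(a=old, b=new, autojunk=False)
    delta = []
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal":
            delta.append(("copy", i1, i2 - i1))   # reference bytes of the old file
        else:
            delta.append(("insert", new[j1:j2]))  # ship literal new bytes
    return delta

def apply_delta(old: bytes, delta):
    """Rebuild the new file from the old file plus the delta."""
    out = bytearray()
    for op in delta:
        if op[0] == "copy":
            _, off, n = op
            out += old[off:off + n]
        else:
            out += op[1]
    return bytes(out)

old = b"the quick brown fox jumps over the lazy dog"
new = b"the quick red fox leaps over the lazy dog"
assert apply_delta(old, make_delta(old, new)) == new
```

The more similar the two byte streams, the more of the delta is cheap "copy" operations, which is exactly why transforming compressed files into a common decoded similarity space shrinks the delta.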
Speaker Lewei Jin (Zhejiang University)

Lewei Jin received his bachelor's degree from Hangzhou University of Electronic Science and Technology. He is currently pursuing a PhD in Software Engineering at Zhejiang University, with a research interest in mobile application security.


LoPrint: Mobile Authentication of RFID-Tagged Items Using COTS Orthogonal Antennas

Yinan Zhu (The Hong Kong University of Science and Technology (HKUST), Hong Kong); Qian Zhang (Hong Kong University of Science and Technology, Hong Kong)

Authenticating RFID-tagged items during mobile inventory is a critical task for anti-counterfeiting. However, past authentication solutions using commercial off-the-shelf (COTS) devices cannot be applied in mobile scenarios, due to either high latency or non-robustness to tag movement. This paper introduces LoPrint, the first system to effectively authenticate mobile tagged items using the COTS orthogonal antennas existing in most infrastructures. The key insight of LoPrint is to randomly attach multiple tags on each item as a tag group and leverage the stable layout relationships of this tag group as novel fingerprints, including the relative distance matrix (RDM) and relative orientation matrix (ROM). Additionally, a new hardware fingerprint called cross-polarization ratio (CPR) is proposed to help distinguish the tag category. Furthermore, a lightweight approach is designed to robustly extract RDM, ROM, and CPR from RSSI and phase sequences. LoPrint is prototyped and deployed on a conveyor in a lab environment and a tunnel in a real-world RFID warehouse, where 726 tagged items with random layouts are used for evaluation. Experimental results show that LoPrint can achieve a high authentication accuracy of 82.92% on the fixed conveyor and 79.48% on the random warehouse trolley, outperforming the transferred state-of-the-art solution by over 10x.
Speaker Yinan Zhu (Hong Kong University of Science and Technology)

Yinan Zhu is currently a PhD candidate at the Department of Computer Science and Engineering, Hong Kong University of Science and Technology (HKUST).


Session Chair

Ruozhou Yu (North Carolina State University, USA)

Session B-8: Streaming Systems

Conference: 8:30 AM — 10:00 AM PDT (Local: May 23 Thu, 11:30 AM — 1:00 PM EDT)
Location: Regency B

Scout Sketch: Finding Promising Items in Data Streams

Tianyu Ma, Guoju Gao, He Huang, Yu-e Sun and Yang Du (Soochow University, China)

This paper studies a new but important pattern for items in data streams, called promising items. A promising item is one whose frequencies over multiple consecutive time windows show an overall upward trend, while a slight decrease in some of these windows is allowed. Many practical applications can benefit from identifying promising items, e.g., detecting potentially hot events or news in social networks, preventing network congestion in communication channels, and monitoring latent attacks in computer networks. To accurately find promising items in data streams in real time under limited memory space, we propose a novel structure named Scout Sketch, which consists of Filter and Finder. Filter is devised based on the Bloom filter to eliminate unqualified items with low memory overhead; Finder records the necessary information about potential items and detects promising items at the end of each time window, using tailor-made detection operations. We also analyze the theoretical performance of Scout Sketch. Finally, we conducted extensive experiments on four real-world datasets. The results show that the F1 score and throughput of Scout Sketch are about 2.02 and 7.23 times those of the compared solutions, respectively.
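The "promising item" pattern itself — an overall upward trend that tolerates a few slight dips — can be captured by a simple predicate on a per-item window-frequency vector. The thresholds below are hypothetical illustrations, not the paper's detection operations:

```python
def is_promising(freqs, max_dips=1, min_rise=1):
    """Illustrative check for the promising-item pattern: frequencies
    over consecutive windows rise overall, allowing up to `max_dips`
    window-to-window decreases (thresholds are made-up examples)."""
    dips = sum(1 for a, b in zip(freqs, freqs[1:]) if b < a)
    overall_rise = freqs[-1] - freqs[0]
    return dips <= max_dips and overall_rise >= min_rise

# An upward trend with one slight dip still qualifies;
# a steadily declining item does not.
print(is_promising([3, 5, 4, 8, 10]))  # True
print(is_promising([9, 7, 5, 4, 2]))   # False
```

Scout Sketch's contribution is doing this kind of test approximately, for all items at once, under a tight memory budget rather than with exact per-item counters.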

Exstream: A Delay-minimized Streaming System with Explicit Frame Queueing Delay Measurement

Shinik Park, Sanghyun Han, Junseon Kim and Jongyun Lee (Seoul National University, Korea (South)); Sangtae Ha (University of Colorado Boulder, USA); Kyunghan Lee (Seoul National University, Korea (South))

Network fluctuations can cause unpredictable degradation of the user's quality of experience (QoE) in real-time video streaming. The intrinsic property of real-time video streaming, which generates delay-sensitive, chunk-based video frames, makes the situation even more complicated. Although previous approaches have tried to alleviate this problem by controlling the video bitrate based on the current network capacity estimate, they do not take into account the explicit queueing delay experienced by the video frame when determining the bitrate of upcoming video frames. To tackle this problem, we propose a new real-time video streaming system, Exstream, that adapts to dynamic network conditions with the help of a video bitrate control method and a bandwidth estimation method designed for real-time video streaming environments. Exstream explicitly estimates the queueing delay experienced by each video frame based on the transmission time budget that each frame can maximally utilize, which depends on the frame generation interval, and adjusts the bitrate of newly generated video frames to suppress the queueing delay to near zero. Our comprehensive experiments demonstrate that Exstream achieves lower frame delay than four existing systems, Salsify, WebRTC, Skype, and Hangouts, without frequent video frame skips.
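The notion of a per-frame queueing delay against a transmission-time budget can be illustrated with the classic Lindley recursion: delay carries over whenever a frame's transmission exceeds its budget (here taken as the frame generation interval). This is a sketch of the underlying idea, not Exstream's actual estimator.

```python
def queueing_delays(tx_times_ms, frame_interval_ms):
    """Per-frame queueing delay (ms) via the Lindley recursion:
    q_i = max(0, q_{i-1} + tx_i - budget), where the budget is the
    frame generation interval. Illustrative sketch only."""
    delays, q = [], 0.0
    for tx in tx_times_ms:
        q = max(0.0, q + tx - frame_interval_ms)
        delays.append(q)
    return delays

# At 30 fps the budget per frame is ~33 ms; a 40 ms transmission
# leaves 7 ms of backlog for the next frame to absorb.
print(queueing_delays([30, 40, 20], 33))  # [0.0, 7.0, 0.0]
```

A controller in this spirit would lower the bitrate of upcoming frames whenever the recursion trends away from zero.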
Speaker Shinik Park (Seoul National University)



Emma: Elastic Multi-Resource Management for Realtime Stream Processing

Rengan Dou, Xin Wang and Richard T. B. Ma (National University of Singapore, Singapore)

In stream processing applications, an operator is often instantiated into multiple parallel execution instances, referred to as executors, to facilitate large-scale data processing. Due to unpredictable changes in executor workloads, data tuples processed by different executors may exhibit varying latency. The executor with the maximum latency significantly impacts the end-to-end latency. Existing solutions, such as load balancing and horizontal scaling, which involve workload migration, often incur substantial time overhead. In contrast, elastically scaling up/down resources of executors can offer rapid adaptability; however, prior works only considered CPU scaling.

This paper presents Emma, an elastic multi-resource manager. The core of Emma is a multi-resource provisioning plan that conducts performance analysis and resource adjustment in real-time. We explore the relationship between resources and performance experimentally and theoretically, guiding the plan to adaptively allocate the appropriate combination of resources to 1) accommodate the dynamic workload; 2) efficiently utilize resources to enhance the performance of as many executors as possible. Additionally, we propose an online learning method that makes the manager seamlessly adapt to diverse stream applications. We integrate Emma with Apache Samza, and our experiments show that compared to existing solutions, Emma can significantly reduce latency by orders of magnitude in real-world applications.
Speaker Rengan Dou (National University of Singapore)

Rengan Dou is a Ph.D. student at the National University of Singapore. He received his bachelor's degree from the University of Science and Technology of China. His research interests include cloud computing, edge computing, and stream processing.


A Multi-Agent View of Wireless Video Streaming with Delayed Client-Feedback

Nouman Khan (University of Michigan, USA); Ujwal Dinesha (Texas A&M University, USA); Subrahmanyam Arunachalam (Texas A&M University, USA); Dheeraj Narasimha (Texas A&M University, USA); Vijay Subramanian (University of Michigan, USA); Srinivas G Shakkottai (Texas A&M University, USA)

We study the optimal control of multiple video streams over a wireless downlink from a base-transceiver-station (BTS)/access point to N end-devices (EDs). The BTS sends video packets to each ED under a joint transmission energy constraint, the EDs choose when to play out the received packets, and the collective goal is to provide a high Quality-of-Experience (QoE) to the clients/end-users. All EDs send feedback about their states and actions to the BTS which reaches it after a fixed deterministic delay. We analyze this team problem with delayed feedback as a cooperative Multi-Agent Constrained Partially Observable Markov Decision Process (MA-C-POMDP).

First, using a recently established strong duality result for MA-C-POMDPs, the original problem is decomposed into N independent unconstrained transmitter-receiver (two-agent) problems---all sharing a Lagrange multiplier (that also needs to be optimized for optimal control). Thereafter, the common information (CI) approach and the formalism of approximate information states (AISs) are used to guide the design of a neural-network based architecture for learning-based multi-agent control in a single unconstrained transmitter-receiver problem. Finally, simulations on a single transmitter-receiver pair with a stylized QoE model are performed to highlight the advantage of delay-aware two-agent coordination over the transmitter choosing both transmission and play-out actions (perceiving the delayed state of the receiver as its current state).

Session Chair

Srikanth V. Krishnamurthy (University of California, Riverside, USA)

Session C-8: Staleness and Age of Information (AoI)

Conference: 8:30 AM — 10:00 AM PDT (Local: May 23 Thu, 11:30 AM — 1:00 PM EDT)
Location: Regency C

An Analytical Approach for Minimizing the Age of Information in a Practical CSMA Network

Suyang Wang, Oluwaseun Ajayi and Yu Cheng (Illinois Institute of Technology, USA)

Age of Information (AoI) is a crucial metric in modern communication systems, quantifying information freshness at its destination. This study proposes a novel and general approach utilizing stochastic hybrid systems (SHS) for AoI analysis and minimization in carrier sense multiple access (CSMA) networks. Specifically, we consider a practical yet general networking scenario where multiple nodes contend for transmission through a standard CSMA-based medium access control (MAC) protocol, and the tagged node under consideration uses a small transmission buffer to keep the AoI small. We develop, for the first time, an SHS-based analytical model for this finite-buffer transmission system over the CSMA MAC. Moreover, we develop a creative method to incorporate the collision probability into the SHS model, with background nodes having heterogeneous traffic arrival rates. Our model enables us to analytically find the optimal sampling rate that minimizes the AoI of the tagged node in a wide range of practical networking scenarios. Our analysis reveals insights into the impact of buffer size when jointly optimizing throughput and AoI. The SHS model is cast over an 802.11-based MAC to examine its performance, with comparison to ns-based simulation results. The accuracy of the model and the efficiency of the optimal sampling are convincingly demonstrated.
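The AoI metric itself is a sawtooth: age grows at unit rate and resets to the delivered update's age at each delivery. The sketch below numerically integrates that sawtooth to get a time-average AoI; it illustrates only the metric, not the paper's SHS analysis, and assumes age starts at 0 at t = 0.

```python
def average_aoi(events, horizon):
    """Time-average Age of Information over [0, horizon].
    events: time-ordered (gen_time, delivery_time) pairs of received
    updates. Age grows at slope 1 and resets to (delivery - gen) at
    each delivery (illustrative integration, not the SHS model)."""
    area, t, age = 0.0, 0.0, 0.0
    for gen, dly in events:
        dt = dly - t
        area += dt * (age + dt / 2)  # trapezoid under the rising ramp
        age = dly - gen              # reset to the fresh update's age
        t = dly
    dt = horizon - t                 # tail after the last delivery
    area += dt * (age + dt / 2)
    return area / horizon

# Two updates: generated at t=0,2 and delivered at t=1,3.
print(average_aoi([(0, 1), (2, 3)], horizon=4))  # 1.5
```

Optimizing the sampling rate then amounts to trading off how often updates are generated against the queueing and contention delay they experience before delivery.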

Reducing Staleness and Communication Waiting via Grouping-based Synchronization for Distributed Deep Learning

Yijun Li, Jiawei Huang, Zhaoyi Li, Jingling Liu, Shengwen Zhou, Wanchun Jiang and Jianxin Wang (Central South University, China)

Distributed deep learning has been widely employed to train deep neural networks over large-scale datasets. However, the commonly used parameter server architecture suffers from long synchronization times in data-parallel training. Although existing solutions reduce synchronization overhead by breaking the synchronization barriers or limiting the staleness bound, they inevitably experience low convergence efficiency and long synchronization waiting. To address these problems, we propose Gsyn, which reduces both synchronization overhead and staleness. Specifically, Gsyn divides workers into multiple groups. Workers in the same group coordinate with each other using the bulk synchronous parallel scheme to achieve high convergence efficiency, and each group communicates with the parameter server asynchronously to reduce the synchronization waiting time, consequently increasing the convergence efficiency. Furthermore, we theoretically analyze the optimal number of groups to achieve a good tradeoff between staleness and synchronization waiting. Evaluation in a realistic cluster with multiple training tasks demonstrates that Gsyn is beneficial and accelerates distributed training by up to 27% over state-of-the-art solutions.
Speaker Yijun Li (Central South University)



An Easier-to-Verify Sufficient Condition for Whittle Indexability and Application to AoI Minimization

Sixiang Zhou (Purdue University, West Lafayette, USA); Xiaojun Lin (The Chinese University of Hong Kong, Hong Kong & Purdue University, West Lafayette (on Leave), USA)

We study a scheduling problem for a Base Station transmitting status information to multiple User Equipments (UE) with the goal of minimizing the total expected Age-of-Information (AoI). Such a problem can be formulated as a Restless Multi-Armed Bandit (RMAB) problem and solved asymptotically-optimally by a low-complexity Whittle index policy, if each UE's sub-problem is Whittle indexable. However, proving Whittle indexability can be highly non-trivial, especially when the value function cannot be derived in closed form. In particular, this is the case for the AoI minimization problem with stochastic arrivals and unreliable channels, whose Whittle indexability remains an open problem. To overcome this difficulty, we develop a sufficient condition for Whittle indexability based on the notion of active time (AT). Even though the AT condition shares considerable similarity with the Partial Conservation Law (PCL) condition, it is much easier to understand and verify. We then apply our AT condition to the stochastic-arrival unreliable-channel AoI minimization problem and, for the first time in the literature, prove its Whittle indexability. Our proof uses a novel coupling approach to verify the AT condition, which may also be of independent interest for other large-scale RMAB problems.

Joint Optimization of Model Deployment for Freshness-Sensitive Task Assignment in Edge Intelligence

Haolin Liu and Sirui Liu (Xiangtan University, China); Saiqin Long (Jinan University, China); Qingyong Deng (Guangxi Normal University, China); Zhetao Li (Jinan University, China)

Edge intelligence aims to push deep learning (DL) services to the network edge to reduce response time and protect privacy. In implementations, proximate deployment of DL models and timely updates can improve the quality of experience (QoE) for users, but they increase the operation cost and pose a challenge for task assignment. To address this challenge, a joint online optimization problem of DL model deployment (including placement and update) and freshness-sensitive task assignment is formulated to improve QoE and application service provider (ASP) profit. In the problem, we introduce the Age of Information (AoI) to quantify the freshness of the DL model and represent user QoE as an AoI-based utility function. To solve the problem, an online model placement, update, and task assignment (MPUTA) algorithm is proposed. It first converts the time-slot-coupled problem into a single-time-slot problem using a regularization technique, and decomposes the single-time-slot problem into model deployment and task assignment subproblems. It then uses a randomized rounding technique to handle the model deployment subproblem and a graph matching technique to solve the task assignment subproblem. In simulation experiments, MPUTA is shown to outperform other benchmark algorithms in terms of both user QoE and ASP profit.
Speaker Sirui Liu (Xiangtan University)

Sirui Liu received the B.Eng. degree from Wuhan Polytechnic University, China, in 2021, and is currently pursuing a Master's degree in Computer Technology at Xiangtan University, China. His research interests include edge intelligence and dynamic deep learning model deployment.


Session Chair

Hongwei Zhang (Iowa State University, USA)

Session D-8: Backscatter Networking

Conference: 8:30 AM — 10:00 AM PDT (Local: May 23 Thu, 11:30 AM — 1:00 PM EDT)
Location: Regency D

TRIDENT: Interference Avoidance in Multi-reader Backscatter Network Via Frequency-space Division

Yang Zou (Tsinghua University, China); Xin Na (Tsinghua University, China); Xiuzhen Guo (Zhejiang University, China); Yimiao Sun and Yuan He (Tsinghua University, China)

Backscatter is an enabling technology for battery-free sensing in industrial IoT applications. To fully cover the numerous tags in a deployment area, one often needs to deploy multiple readers, each of which communicates with the tags within its communication range. But the backscattered signals from a tag are likely to reach a reader outside its communication range, causing undesired interference. Conventional approaches for interference avoidance, either TDMA or CSMA based, separate the readers' medium accesses in the time dimension and suffer from limited network throughput. In this paper, we propose TRIDENT, a novel backscatter tag design that enables interference avoidance with frequency-space division. By incorporating a tunable bandpass filter and multiple terminal loads, a TRIDENT tag is able to detect its channel condition and adaptively adjust the frequency band and the power of its backscattered signals, so that all the readers in the network can operate concurrently without interfering with one another. We implement TRIDENT and evaluate its performance under various settings. The results demonstrate that TRIDENT enhances the network throughput by 3.18×, compared to the TDMA-based scheme.
Speaker Yang Zou (Tsinghua University)

Yang Zou is currently a PhD. student at Tsinghua University. He received his B.E. degree from the Beijing University of Aeronautics and Astronautics (BUAA). His research interests include wireless networking and communication.


ConcurScatter: Scalable Concurrent OFDM Backscatter Using Subcarrier Pattern Diversity

Caihui Du (Beijing Institute of Technology, China); Jihong Yu (Beijing Institute of Technology, China); Rongrong Zhang (Capital Normal University, China); Jianping An (Beijing Institute of Technology, China)

Ambient OFDM backscatter communication has attracted considerable research effort. Yet prior works focus on point-to-point backscatter from a single tag, leaving behind efficient backscatter networking of multiple tags. In this paper, we design and implement ConcurScatter, the first ambient OFDM backscatter system that scales to concurrent transmission of hundreds of tags. Our key innovation is building and using subcarrier pattern diversity to distinguish concurrent tags. This yields linear rather than exponential collision states, in contrast to prior works based on IQ-domain diversity, supporting more concurrent transmissions. We realize this by designing a suite of techniques, including midair frequency synthesis, which forms a unique subcarrier pattern for each concurrent tag; non-integer cyclic shift, which helps support more concurrent tags; and subcarrier pattern reconstruction, which creates virtual subcarriers to enable single-symbol parallel decoding. Testbed experiments confirm that ConcurScatter supports seven more concurrent tags with similar BER and 8.4x higher throughput than the point-to-point backscatter RapidRider. Large-scale simulation shows that ConcurScatter supports 200 tags, which is 40x more than the state-of-the-art concurrent OFDM backscatter FreeCollision.
Speaker Caihui Du (Beijing Institute of Technology)

Caihui Du received the B.E. degree from Beijing Institute of Technology, Beijing, China, in 2021. She is currently pursuing the Ph.D. degree with the School of Information and Electronics, Beijing Institute of Technology. Her research interests include ambient backscatter communication and Internet-of-Things applications.


Efficient LTE Backscatter with Uncontrolled Ambient Traffic

Yifan Yang, Yunyun Feng and Wei Gong (University of Science and Technology of China, China); Yu Yang (City University of Hong Kong, Hong Kong)

Ambient LTE backscatter is a promising way to enable ubiquitous wireless communication with ultra-low power and cost. However, modulation in previous LTE backscatter systems relies heavily on the original data (content) of the signals: they either demodulate tag data using an additional receiver that provides the content of the excitation, or modulate only a few predefined reference signals in random ambient LTE traffic. This paper presents CABLTE, a content-agnostic backscatter system that efficiently utilizes uncontrolled LTE PHY resources for backscatter communication using a single receiver. Our system is superior to prior work in two aspects: 1) using one receiver to obtain tag data makes CABLTE more practical in real-world applications, and 2) efficient modulation on LTE PHY resources improves the data rate of backscatter communication. To obtain the tag data without knowing the ambient content, we design a checksum-based codeword translation method. We also propose a customized channel estimation scheme and a signal identification component in the backscatter system to ensure accurate modulation and demodulation. Extensive experiments show that CABLTE provides a maximum tag throughput of 22 kbps, which is 3.67x higher than the content-agnostic system CAB and even 1.38x higher than the content-based system SyncLTE.
Speaker Yifan Yang (University of Science and Technology of China)

Yifan Yang received his B.S. degree from the School of Computer Science and Technology, University of Science and Technology of China, Anhui Province, China in 2021. He is currently pursuing the Ph.D. degree at the School of Computer Science and Technology, University of Science and Technology of China, Anhui Province, China. He is also a joint Ph.D. student at the School of Data Science, City University of Hong Kong. His research interests include wireless networks and IoT.


Efficient Two-Way Edge Backscatter with Commodity Bluetooth

Maoran Jiang (University of Science and Technology of China, China); Xin Liu (The Ohio State University, USA); Li Dong (Macau University of Science and Technology, Macao); Wei Gong (University of Science and Technology of China, China)

Two-way backscatter is essential to general-purpose backscatter communication as it provides rich interaction to support diverse applications on commercial devices. However, existing Bluetooth backscatter systems suffer from unstable uplinks due to poor carrier-identification capability and inefficient downlinks caused by packet-length modulation. This paper proposes EffBlue, an efficient two-way backscatter design for commercial Bluetooth devices. EffBlue employs a simple edge backscatter server that alleviates the computational burden on the tag and helps build efficient uplinks and downlink. Specifically, efficient uplinks are designed by introducing an accurate synchronization scheme, which can effectively eliminate the use of non-compliant packets as carriers. To break the limitation of packet-level modulation, we design a new symbol-level WiFi-ASK downlink where the edge sends ASK-like WiFi signals and the tag can decode such signals using a simple envelope detector. We prototype the edge server using commodity WiFi and Bluetooth chips and build two-way backscatter tags with FPGAs. Experimental results show that EffBlue can identify the target excitations with more than 99% precision. Meanwhile, its WiFi-ASK downlink can achieve up to 124 kbps, which is 25x better than FreeRider.
Speaker Maoran Jiang (University of Science and Technology of China)

Maoran Jiang is a third-year Computer Science Ph.D. student at the University of Science and Technology of China. His research interests lie in wireless networks and ultra-low power systems.


Session Chair

Fernando A. Kuipers (Delft University of Technology, The Netherlands)

Session E-8: Machine Learning 2

Conference: 8:30 AM — 10:00 AM PDT (Local: May 23 Thu, 11:30 AM — 1:00 PM EDT)
Location: Regency E

Deep Learning Models As Moving Targets To Counter Modulation Classification Attacks

Naureen Hoque and Hanif Rahbari (Rochester Institute of Technology, USA)

Malicious entities use advanced modulation classification (MC) techniques to launch traffic analysis, selective jamming, evasion, and poisoning attacks. Recent studies show that current defense mechanisms against such attacks are static in nature and vulnerable to persistent adversaries who invest time and resources into learning the defenses, thereby being able to design and execute more sophisticated attacks to circumvent them. In this paper, we present a moving-target defense framework to support a novel modulation-masking mechanism we develop against advanced and persistent modulation classification attacks. The modulated symbols are masked using small perturbations before transmission to make them appear as if from another modulation scheme. By deploying a pool of deep learning models and perturbation-generating techniques, the defense strategy keeps changing (moving) them when needed, making it difficult for adversaries to keep up with the defense system's changes over time. We show that the overall system performance remains unaffected under our technique. We further demonstrate that our masking technique, in addition to other existing defenses, can be learned and circumvented over time by a persistent adversary unless a moving-target defense approach is adopted.
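The masking step itself — perturbing modulated symbols before transmission so the constellation no longer resembles the true scheme, while a legitimate receiver that knows the perturbation can remove it — can be sketched in a few lines. The perturbation generator below is a made-up random one; the paper draws from a pool of learned generators and models.

```python
import numpy as np

rng = np.random.default_rng(0)

# QPSK symbols with unit average energy
bits = rng.integers(0, 2, size=(128, 2))
symbols = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Small shared-secret perturbation (hypothetical generator; the paper
# rotates among a pool of perturbation-generating techniques)
mask = 0.15 * (rng.standard_normal(128) + 1j * rng.standard_normal(128))
masked = symbols + mask  # what an eavesdropping classifier observes

# A legitimate receiver that knows the mask removes it exactly
recovered = masked - mask
assert np.allclose(recovered, symbols)
```

The moving-target aspect then amounts to periodically swapping which generator produced `mask`, so an adversary's classifier trained against one masking regime goes stale.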

Deep Learning-based Modulation Classification of Practical OFDM signals for Spectrum Sensing

Byungjun Kim (UCSD, USA); Peter Gerstoft (University of California, San Diego, USA); Christoph F Mecklenbräuker (TU Wien, Austria)

In this study, the modulation of symbols on OFDM subcarriers is classified for transmissions following Wi-Fi 6 and 5G downlink specifications. First, our approach estimates the OFDM symbol duration and cyclic prefix length based on the cyclic autocorrelation function. We propose a feature extraction algorithm characterizing the modulation of OFDM signals, which includes removing the effects of synchronization errors. The obtained feature is converted into a 2D histogram of phase and amplitude, and this histogram is taken as input to a convolutional neural network (CNN)-based classifier. The classifier does not require prior knowledge of protocol-specific information such as the Wi-Fi preamble or the resource allocation of 5G physical channels. The classifier's performance, evaluated using synthetic and real-world measured over-the-air (OTA) datasets, achieved a minimum accuracy of 97% with OTA data when the SNR is above the value required for data transmission.
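The 2D phase/amplitude histogram feature can be sketched directly with NumPy. The bin count and amplitude range below are illustrative choices, not the paper's parameters, and the input is assumed to be equalized per-subcarrier symbols.

```python
import numpy as np

def phase_amp_histogram(symbols, bins=32, max_amp=2.0):
    """Convert complex subcarrier symbols into a normalized 2D
    phase/amplitude histogram usable as CNN input (bin count and
    amplitude range are illustrative, not the paper's values)."""
    phase = np.angle(symbols)  # in (-pi, pi]
    amp = np.abs(symbols)
    hist, _, _ = np.histogram2d(
        phase, amp, bins=bins,
        range=[[-np.pi, np.pi], [0.0, max_amp]])
    return hist / hist.sum()  # normalize so the CNN sees a distribution

# QPSK symbols produce four tight phase clusters at unit amplitude,
# a pattern a CNN can separate from, e.g., 16-QAM's amplitude rings.
rng = np.random.default_rng(1)
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 4096)))
h = phase_amp_histogram(qpsk)
```

Because the histogram discards symbol order, the feature is naturally agnostic to protocol-specific scheduling, which matches the classifier's design goal.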
Speaker
Speaker biography is not available.

Resource-aware Deployment of Dynamic DNNs over Multi-tiered Interconnected Systems

Chetna Singhal (Indian Institute of Technology Kharagpur, India); Yashuo Wu (University of California Irvine, USA); Francesco Malandrino (CNR-IEIIT, Italy); Marco Levorato (University of California, Irvine, USA); Carla Fabiana Chiasserini (Politecnico di Torino & CNIT, IEIIT-CNR, Italy)

The increasing pervasiveness of intelligent mobile applications requires exploiting the full range of resources offered by the mobile-edge-cloud network for the execution of inference tasks. However, due to the heterogeneity of such multi-tiered networks, it is essential to make the applications' demand amenable to the available resources while minimizing energy consumption. Modern dynamic deep neural networks (DNN) achieve this goal by designing multi-branched architectures where early exits enable sample-based adaptation of the model depth. In this paper, we tackle the problem of allocating sections of DNNs with early exits to the nodes of the mobile-edge-cloud system. By envisioning a 3-stage graph-modeling approach, we represent the possible options for splitting the DNN and deploying the DNN blocks on the multi-tiered network, embedding both the system constraints and the application requirements in a convenient and efficient way. Our framework - named Feasible Inference Graph (FIN) - can identify the solution that minimizes the overall inference energy consumption while enabling distributed inference over the multi-tiered network with the target quality and latency. Our results, obtained for DNNs with different levels of complexity, show that FIN matches the optimum and yields over 65% energy savings relative to a state-of-the-art technique for cost minimization.
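The split-point search can be illustrated with a toy enumeration over a device-edge-cloud hierarchy. The additive energy model (per-tier cost factors plus per-link transfer costs) is a simplifying assumption for illustration, not FIN's actual graph formulation.

```python
def best_split(blocks, tier_cost, link_cost):
    """Enumerate cut points (c1, c2) that place blocks [0, c1) on the
    device, [c1, c2) on the edge, and [c2, n) on the cloud, returning the
    minimum-energy split. Toy stand-in for FIN's graph search."""
    n = len(blocks)
    best_e, best_cuts = float('inf'), None
    for c1 in range(n + 1):
        for c2 in range(c1, n + 1):
            e = (tier_cost[0] * sum(blocks[:c1]) +
                 tier_cost[1] * sum(blocks[c1:c2]) +
                 tier_cost[2] * sum(blocks[c2:]))
            if c1 < n:
                e += link_cost[0]        # activations leave the device
            if c2 < n:
                e += link_cost[1]        # activations leave the edge
            if e < best_e:
                best_e, best_cuts = e, (c1, c2)
    return best_e, best_cuts

# Four equal blocks; the cloud is cheap per unit but transfers cost energy
best_e, best_cuts = best_split([4, 4, 4, 4],
                               tier_cost=[1.0, 0.5, 0.2],
                               link_cost=[2.0, 3.0])
```

With these assumed numbers the search pushes all blocks to the cloud (cuts `(0, 0)`), since the transfer cost is outweighed by the cheaper per-block energy; an early-exit DNN would rerun this trade-off per exit branch.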
Speaker Chetna Singhal

Chetna Singhal is an Assistant Professor in the Electronics and Communication Engineering Department at IIT Kharagpur.


Jewel: Resource-Efficient Joint Packet and Flow Level Inference in Programmable Switches

Aristide Tanyi-Jong Akem (IMDEA Networks Institute, Spain & Universidad Carlos III de Madrid, Spain); Beyza Butun (Universidad Carlos III de Madrid & IMDEA Networks Institute, Spain); Michele Gucciardo and Marco Fiore (IMDEA Networks Institute, Spain)

Embedding machine learning (ML) models in programmable switches realizes the vision of high-throughput and low-latency inference at line rate. Recent works have made breakthroughs in embedding Random Forest (RF) models in switches for either packet-level inference or flow-level inference. The former relies on features from packet headers that are simple to implement but limit accuracy in challenging use cases; the latter exploits richer flow features to improve accuracy, but leaves early packets in each flow unclassified. We propose Jewel, an in-switch ML model based on a fully joint packet- and flow-level design, which takes the best of both worlds by classifying early flow packets individually and shifting to flow-level inference when possible. Our proposal involves (i) a single RF model trained to classify both packets and flows, and (ii) hardware-aware model selection and training techniques for resource footprint minimization. We implement Jewel in P4 and deploy it in a testbed with Intel Tofino switches, where we run extensive experiments with a variety of real-world use cases. Results reveal how our solution outperforms four state-of-the-art benchmarks, with accuracy gains in the 2.2%-5.3% range.
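In-switch RF inference generally works by compiling each decision tree into ordered match rules. A minimal sketch of that compilation step, with a hand-built toy tree and hypothetical feature names (not Jewel's P4 encoding):

```python
def flatten_tree(node, conds=()):
    """Flatten a threshold decision tree into ordered first-match rules,
    the way a tree is compiled into switch match-action entries (generic
    illustration). A node is a label or (feature, threshold, left, right),
    with the left branch taken when feature <= threshold."""
    if not isinstance(node, tuple):
        return [(conds, node)]
    feat, thr, left, right = node
    return (flatten_tree(left, conds + ((feat, '<=', thr),)) +
            flatten_tree(right, conds + ((feat, '>', thr),)))

def classify(rules, pkt):
    """First-match evaluation, mimicking a match-action table lookup."""
    for conds, label in rules:
        if all((pkt[f] <= t) if op == '<=' else (pkt[f] > t)
               for f, op, t in conds):
            return label

# Hypothetical toy tree on two header features
tree = ('len', 200, ('port', 443, 'video', 'chat'), 'download')
rules = flatten_tree(tree)
label_a = classify(rules, {'len': 150, 'port': 443})
label_b = classify(rules, {'len': 500, 'port': 80})
```

Each root-to-leaf path becomes one table entry, which is why model size and feature choice dominate the switch resource footprint.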
Speaker Beyza Bütün

Beyza Bütün is a Ph.D. student in the Networks Data Science Group at IMDEA Networks Institute in Madrid, Spain. She is part of the project ECOMOME, which aims to model and optimise the energy consumption of networks. She is also a Ph.D. student in the Department of Telematics Engineering at Universidad Carlos III de Madrid, Spain. She holds a bachelor's and master's degree in Computer Engineering from Middle East Technical University in Ankara, Turkey. During her master's, she worked on the optimal design of wireless data center networks. Beyza's current research interests are in-band network intelligence, distributed in-band programming, and energy consumption optimization in the data plane.


Session Chair

Marilia Curado (University of Coimbra, Portugal)

Session F-8

F-8: Internet Architectures and Protocols

Conference
8:30 AM — 10:00 AM PDT
Local
May 23 Thu, 11:30 AM — 1:00 PM EDT
Location
Regency F

Efficient IPv6 Router Interface Discovery

Tao Yang and Zhiping Cai (National University of Defense Technology, China)

Efficient discovery of router interfaces on the IPv6 Internet is critical for network measurement and cybersecurity. However, existing solutions commonly suffer from inefficiencies due to a lack of initial probing targets (seeds), ultimately imposing limitations on large-scale IPv6 networks. Therefore, it is imperative to develop a methodology that enables the efficient collection of IPv6 router interfaces with limited resources, considering the impracticality of conducting a brute-force exploration across the extensive IPv6 address space.
In this paper, we introduce Treestrace, an innovative asynchronous prober specifically designed for this purpose. Without prior knowledge of the networks, this tool incrementally adjusts search directions, automatically prioritizing the survey of IPv6 address spaces with a higher concentration of IPv6 router interfaces. Furthermore, we have developed a carefully crafted architecture optimized for probing performance, allowing the tool to probe at the highest theoretically possible rate without requiring excessive computational resources.
Real-world tests show that Treestrace outperforms state-of-the-art works on both seed-based and seedless tasks, achieving at least a 5.57-fold efficiency improvement on large-scale IPv6 router interface discovery. With Treestrace, we discovered approximately 8 million IPv6 router interface addresses from a single vantage point within several hours.
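The idea of steering probes toward interface-dense regions can be sketched with a score-guided allocator over candidate prefixes. This is a generic sketch, not Treestrace's algorithm; the prefixes and per-prefix response densities below are simulated assumptions.

```python
import random

def guided_probe(densities, budget, batch=50, seed=0):
    """Repeatedly spend a batch of probes on the prefix with the best
    observed discovery rate (with an optimistic prior so every prefix is
    tried at least once). Generic sketch of seedless, feedback-driven
    target selection; `densities` simulates ground-truth responsiveness."""
    rng = random.Random(seed)
    stats = {p: [1, 1] for p in densities}   # optimistic prior: 1 hit / 1 probe
    found = {p: 0 for p in densities}
    for _ in range(budget // batch):
        p = max(stats, key=lambda q: stats[q][0] / stats[q][1])
        hits = sum(rng.random() < densities[p] for _ in range(batch))
        stats[p][0] += hits
        stats[p][1] += batch
        found[p] += hits
    return found

# Hypothetical prefixes: one dense region, two nearly empty ones
densities = {'2001:db8:a::/48': 0.30,
             '2001:db8:b::/48': 0.02,
             '2001:db8:c::/48': 0.01}
found = guided_probe(densities, budget=2000)
```

After one exploratory batch per prefix, the allocator concentrates the remaining budget on the dense prefix, which is the qualitative behavior a seedless prober needs to avoid wasting probes on the vast empty address space.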
Speaker Tao Yang (National University of Defense Technology)

Tao Yang received his B.Sc. and M.Sc. degrees in computer science and technology from the National University of Defense Technology, China, in 2019 and 2021, respectively. He is currently pursuing a Ph.D. degree at the same institution. His research interests include IPv6 scanning and network security.


DNSScope: Fine-Grained DNS Cache Probing for Remote Network Activity Characterization

Jianfeng Li, Zheng Lin, Xiaobo Ma, Jianhao Li and Jian Qu (Xi'an Jiaotong University, China); Xiapu Luo (The Hong Kong Polytechnic University, Hong Kong); Xiaohong Guan (Xi'an Jiaotong University & Tsinghua University, China)

The domain name system (DNS) is indispensable to nearly every Internet service. It has been extensively utilized for network activity characterization in passive and active approaches. Compared to the passive approach, active DNS cache probing is privacy-preserving and low-cost, enabling worldwide characterization of remote network activities in different networks. Unfortunately, existing probing-based methods are too coarse-grained to characterize the time-varying features of network activities, substantially limiting their applications in time-sensitive tasks. In this paper, we present DNSScope, a fine-grained DNS cache probing framework that tackles three challenges: sample sparsity, observational distortion, and cache entanglement. DNSScope synthesizes statistical learning and self-supervised transfer learning to achieve time-varying characterization. Extensive evaluations demonstrate that it can accurately estimate the time-varying DNS query arrival rates on recursive DNS resolvers. Its average mean absolute error is 0.124, as low as one-sixth that of the baseline methods.
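The renewal-theory intuition behind cache probing can be sketched as follows: for Poisson(λ) client queries and a record TTL of T, the record sits in cache a fraction p = λT/(1+λT) of the time, so the hit fraction seen by a prober gives the coarse estimate λ = p/((1-p)T). This is only the coarse-grained baseline that fine-grained approaches refine; the simulation parameters below are assumptions.

```python
import random

def estimate_rate(hits, probes, ttl):
    """Coarse Poisson-rate estimate from the cache-hit fraction:
    p = lambda*T / (1 + lambda*T)  =>  lambda = p / ((1 - p) * T)."""
    p = hits / probes
    return p / ((1 - p) * ttl)

# Simulate Poisson client queries against a TTL cache, then probe it
random.seed(7)
lam, ttl, horizon = 0.5, 10.0, 200_000.0
probe_times = [i * 37.0 for i in range(int(horizon / 37))]  # sparse probes
t, expiry, hits, pi = 0.0, -1.0, 0, 0
while t < horizon:
    t += random.expovariate(lam)              # next client query arrives
    while pi < len(probe_times) and probe_times[pi] < t:
        hits += probe_times[pi] < expiry      # probe sees a cached record?
        pi += 1
    if t >= expiry:                           # miss: record re-cached for TTL
        expiry = t + ttl
est = estimate_rate(hits, len(probe_times), ttl)
```

The estimate recovers the simulated rate (λ = 0.5) to within sampling noise, but note it is a long-run average: capturing how λ varies over time is exactly the harder problem the abstract targets.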
Speaker Jianhao Li (Xi’an Jiaotong University)

Jianhao Li is currently working toward the M.E. degree in Computer Science and Technology at Xi'an Jiaotong University, Xi'an, China. His research interests include cyber security and network measurement.


An Elemental Decomposition of DNS Name-to-IP Graphs

Alex Anderson, Aadi Swadipto Mondal and Paul Barford (University of Wisconsin - Madison, USA); Mark Crovella (Boston University, USA); Joel Sommers (Colgate University, USA)

The Domain Name System (DNS) is a critical piece of Internet infrastructure with remarkably complex properties and uses, and accordingly has been extensively studied. In this study, we contribute to that body of work by organizing and analyzing records maintained within the DNS as a bipartite graph. We find that relating names and addresses in this way uncovers a surprisingly rich structure. In order to characterize that structure, we introduce a new graph decomposition for DNS name-to-IP mappings, which we term elemental decomposition. In particular, we argue that (approximately) decomposing this graph into bicliques (maximal complete bipartite subgraphs) exposes this rich structure. We utilize large-scale censuses of the DNS to investigate the characteristics of the resulting decomposition, and illustrate how the exposed structure sheds new light on a number of questions about how the DNS is used in practice and suggests several new directions for future research.
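The core object can be made concrete with a toy name-to-IP census: build the bipartite graph, take connected components, and test whether each component is a biclique, i.e., every name in it resolves to every IP in it. The toy records below are invented for illustration.

```python
from collections import defaultdict

def connected_components(edges):
    """Connected components of a bipartite name->IP graph given as
    (name, ip) pairs, with names and IPs kept in disjoint namespaces."""
    adj = defaultdict(set)
    for name, ip in edges:
        adj[('N', name)].add(('I', ip))
        adj[('I', ip)].add(('N', name))
    seen, comps = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            comp.add(n)
            stack.extend(adj[n] - seen)
        comps.append(comp)
    return comps

def is_biclique(comp, edges):
    """True iff every name in the component maps to every IP in it."""
    names = {v for t, v in comp if t == 'N'}
    ips = {v for t, v in comp if t == 'I'}
    present = {(n, i) for n, i in edges if n in names}
    return present == {(n, i) for n in names for i in ips}

# Toy census: a CDN-style biclique plus a component that is not one
edges = [('a.com', '1.1.1.1'), ('a.com', '2.2.2.2'),
         ('b.com', '1.1.1.1'), ('b.com', '2.2.2.2'),
         ('c.com', '3.3.3.3'), ('d.com', '3.3.3.3'), ('d.com', '4.4.4.4')]
comps = connected_components(edges)
flags = sorted(is_biclique(c, edges) for c in comps)
```

The first component (two names sharing exactly two IPs, a typical hosting pattern) is a biclique; the second is connected but misses the `('c.com', '4.4.4.4')` edge, which is where an *approximate* decomposition has to make choices.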
Speaker
Speaker biography is not available.

Silent Observers Make a Difference: A Large-scale Analysis of Transparent Proxies on the Internet

Rui Bian (Expatiate Communications, USA); Lin Jin (University of Delaware, USA); Shuai Hao (Old Dominion University, USA); Haining Wang (Virginia Tech, USA); Chase Cotton (University of Delaware, USA)

Transparent proxies are widely deployed on the Internet, bridging the communications between clients and servers and providing desirable benefits to both sides, such as load balancing, security monitoring, and privacy enhancement. Meanwhile, they work silently, as clients and servers may not be aware of their existence. However, due to their invisibility and stealthiness, transparent proxies remain understudied in terms of their behaviors, suspicious activities, and potential vulnerabilities that could be exploited by attackers. To better understand transparent proxies, we design and develop a framework to systematically investigate them in the wild. We identify two major types of transparent proxies, named FDR and CPV. FDR is a type of transparent proxy that independently performs Forced DNS Resolution during interception. CPV is a type of transparent proxy that presents a Cache Poisoning Vulnerability. We perform a large-scale measurement to detect each type of transparent proxy and examine their security implications. In total, we identify 32,246 FDR and 11,286 CPV transparent proxies. We confirm that these two types of transparent proxies are distributed globally: FDRs are observed in 98 countries and CPVs in 51 countries. Our work highlights the issues of vulnerable transparent proxies and provides insights for mitigating such problems.
Speaker
Speaker biography is not available.

Session Chair

Klaus Wehrle (RWTH Aachen University, Germany)

Session G-8

G-8: Ethereum Networks and Smart Contracts

Conference
8:30 AM — 10:00 AM PDT
Local
May 23 Thu, 11:30 AM — 1:00 PM EDT
Location
Prince of Wales/Oxford

LightCross: Sharding with Lightweight Cross-Shard Execution for Smart Contracts

Xiaodong Qi and Yi Li (Nanyang Technological University, Singapore)

Sharding is a prevailing solution to enhance the scalability of current blockchain systems. However, the cross-shard commit protocols adopted in these systems to commit cross-shard transactions commonly incur multi-round shard-to-shard communication, leading to low performance. Furthermore, most solutions only focus on simple transfer transactions without supporting complex smart contracts, preventing the widespread application of sharding. In this paper, we propose LightCross, a novel blockchain sharding system that enables the efficient execution of complex cross-shard smart contracts. First, LightCross offloads the execution of cross-shard transactions to off-chain executors equipped with TEE hardware, which can accommodate execution of arbitrarily complex contracts. Second, we design a lightweight cross-shard commit protocol to commit cross-shard transactions without multi-round shard-to-shard communication. Last, LightCross lowers the cross-shard transaction ratio by dynamically changing the distribution of contracts according to historical transactions. We implemented the LightCross prototype based on the FISCO-BCOS project and evaluated it in real-world blockchain environments, showing that LightCross can achieve 2.6x more throughput than state-of-the-art sharding systems.
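The multi-round communication cost that such lightweight protocols try to avoid can be seen in a toy two-phase-commit (2PC) baseline for a cross-shard transfer: a prepare/vote round followed by a commit/abort round. This is the classic baseline, not LightCross's protocol; the shard model below is an illustration.

```python
class Shard:
    """Toy shard holding account balances (illustration only)."""
    def __init__(self, balances):
        self.balances = balances

    def prepare(self, account, delta):
        # vote yes iff applying delta keeps the balance non-negative
        return self.balances.get(account, 0) + delta >= 0

    def commit(self, account, delta):
        self.balances[account] = self.balances.get(account, 0) + delta

def cross_shard_transfer(src, dst, frm, to, amt):
    """Classic two-round 2PC across shards. Returns (committed, message
    count) to make the multi-round communication cost explicit."""
    votes = [src.prepare(frm, -amt), dst.prepare(to, amt)]
    msgs = 4                              # prepare + vote, per shard
    if all(votes):
        src.commit(frm, -amt)
        dst.commit(to, amt)
    msgs += 2                             # commit-or-abort, per shard
    return all(votes), msgs

a, b = Shard({'alice': 10}), Shard({'bob': 0})
ok, msgs = cross_shard_transfer(a, b, 'alice', 'bob', 7)
bad, _ = cross_shard_transfer(a, b, 'alice', 'bob', 99)
```

Even this simplest transfer costs two synchronized rounds per transaction; systems like LightCross aim to eliminate those rounds, and the cost only grows when the transaction is an arbitrary contract call rather than a balance update.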
Speaker
Speaker biography is not available.

ConFuzz: Towards Large Scale Fuzz Testing of Smart Contracts in Ethereum

Taiyu Wong, Chao Zhang and Yuandong Ni (Institute for Network Sciences and Cyberspace, Tsinghua University, China); Mingsen Luo (University of Electronic Science and Technology of China, China); HeYing Chen (University of Science and Technology of China, China); Yufei Yu (Tsinghua University, China); Weilin Li (University of Science and Technology of China, China); Xiapu Luo (The Hong Kong Polytechnic University, Hong Kong); Haoyu Wang (Huazhong University of Science and Technology, China)

Fuzzing is effective at finding vulnerabilities in traditional applications and has been adapted to smart contracts. However, existing fuzzing solutions for smart contracts are not smart enough and can hardly be applied to large-scale testing since they heavily rely on source code or the ABI. In this paper, we propose ConFuzz, a fuzzing solution applicable to large-scale testing, especially for bytecode-only contracts. ConFuzz adopts Adaptive Interface Recovery (AIR) and Function Information Collection (FIC) algorithms to automatically recover the function interfaces and information, supporting fuzzing of smart contracts without source code or an ABI. Furthermore, ConFuzz employs a Dependence-based Transaction Sequence Generation (DTSG) algorithm to infer dependencies of transactions and generate high-quality sequences to trigger vulnerabilities. Lastly, ConFuzz utilizes taint analysis and function information to help detect harmful vulnerabilities and reduce false positives. The experiments show that ConFuzz can accurately recover over 99.7% of function interfaces and reports more vulnerabilities than state-of-the-art solutions, with 98.89% precision and 93.69% accuracy. On all 1.4M unique contracts from Ethereum, ConFuzz found that over 11.92% of contracts are vulnerable. To the best of our knowledge, ConFuzz is the first efficient and scalable solution for testing all smart contracts deployed on Ethereum.
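Interface recovery without an ABI typically starts from the bytecode's function dispatcher, which compares the first four bytes of calldata against PUSH4 constants. A generic heuristic sketch of that first step (not ConFuzz's AIR algorithm; the bytecode fragment below is hand-built):

```python
def candidate_selectors(bytecode_hex):
    """Scan EVM bytecode for PUSH4 immediates, which in the dispatcher
    encode 4-byte function selectors. PUSH immediates are skipped so data
    bytes are never misread as opcodes. Heuristic sketch only."""
    code = bytes.fromhex(bytecode_hex.removeprefix('0x'))
    selectors, i = set(), 0
    while i < len(code):
        op = code[i]
        if 0x60 <= op <= 0x7F:               # PUSH1..PUSH32
            width = op - 0x5F
            if width == 4 and i + 4 < len(code):
                selectors.add(code[i + 1:i + 5].hex())
            i += 1 + width                   # skip the immediate bytes
        else:
            i += 1
    return selectors

# Toy dispatcher fragment: PUSH4 0x70a08231 (balanceOf(address)) and
# PUSH4 0xa9059cbb (transfer(address,uint256)), each followed by EQ (0x14)
toy = '6370a0823114' + '63a9059cbb14'
sels = candidate_selectors(toy)
```

Real recovery must also filter PUSH4 constants that are not selectors and infer argument types, which is where the adaptive part of an algorithm like AIR comes in.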
Speaker Taiyu Wong (Tsinghua University)

Blockchain engineer at Tsinghua University


Deanonymizing Ethereum Users behind Third-Party RPC Services

Shan Wang, Ming Yang, Wenxuan Dai and Yu Liu (Southeast University, China); Yue Zhang (Drexel University, USA); Xinwen Fu (University of Massachusetts Lowell, USA)

Third-party RPC services have become the mainstream way for users to access Ethereum. In this paper, we present a novel deanonymization attack that can link an Ethereum address to a real-world identity, such as the IP address of a user who accesses Ethereum via a third-party RPC service. We find that RPC API calls result in distinguishable sizes of encrypted TCP packets. An attacker can then detect when a user sends a transaction to an RPC provider and immediately send a beacon transaction after the user transaction. By exploiting the differences in the distributions of inter-arrival time intervals of normal transactions and two simultaneously initiated transactions, the attacker can identify the victim transaction in the Ethereum network. This enables the attacker to correlate the Ethereum address of the victim transaction's initiator with the source IP address of TCP packets from the victim user. We model the attack through empirical measurements and conduct extensive real-world experiments to validate its effectiveness. With three optimization strategies, the correlation accuracy reaches 98.70% and 96.60% in the Ethereum testnet and mainnet, respectively. We are the first to study the deanonymization of Ethereum users behind third-party RPC services.
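The correlation step can be sketched with a simplified rule: having sent the beacon immediately behind the victim's RPC call, the attacker looks for the transaction whose network arrival falls just before the beacon's within a small window. The paper models full inter-arrival distributions; the window and timestamps below are illustrative assumptions.

```python
def correlate_victim(arrivals, beacon_tx, window=0.5):
    """Given P2P-network arrival times {tx: seconds} and the attacker's
    beacon transaction, return the transaction that arrived closest before
    the beacon within `window` seconds. Simplified sketch of the timing
    correlation; the paper uses inter-arrival distributions instead."""
    t_beacon = arrivals[beacon_tx]
    gaps = {tx: t_beacon - t for tx, t in arrivals.items()
            if tx != beacon_tx and 0 <= t_beacon - t <= window}
    if not gaps:
        return None
    return min(gaps, key=gaps.get)

# Illustrative arrival times observed by a listening node
arrivals = {'tx_a': 10.00, 'tx_victim': 12.31, 'beacon': 12.34, 'tx_b': 13.10}
guess = correlate_victim(arrivals, 'beacon')
```

The tiny beacon-to-victim gap is what distinguishes the pair from ordinary transactions, whose inter-arrival intervals are typically much larger; network jitter is what forces the statistical treatment in practice.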
Speaker Shan Wang

Shan Wang is currently a Postdoctoral Fellow in the Department of Computing, The Hong Kong Polytechnic University. She obtained her Ph.D. degree from Southeast University. From Sep. 2019 to May 2023, she studied at UMass Lowell, USA, as a visiting scholar. Her past work mainly focuses on security problems in permissioned blockchains. Currently, she is working on the application of cryptography in blockchain and de-anonymization in public blockchains.


DEthna: Accurate Ethereum Network Topology Discovery with Marked Transactions

Chonghe Zhao (Shenzhen University, China); Yipeng Zhou (Macquarie University, Australia); Shengli Zhang and Taotao Wang (Shenzhen University, China); Quan Z. Sheng (Macquarie University, Australia); Song Guo (The Hong Kong University of Science and Technology, Hong Kong)

In Ethereum, nodes exchange ledger messages over an underlying Peer-to-Peer (P2P) network to reach consistency. Understanding the underlying network topology of Ethereum is crucial for network optimization, security, and scalability. However, accurate discovery of the Ethereum network topology is non-trivial due to its deliberately designed security mechanisms. Consequently, existing measurement schemes cannot accurately infer the Ethereum network topology at a low cost. To address this challenge, we propose the Distributed Ethereum Network Analyzer (DEthna) tool, which can accurately and efficiently measure the Ethereum network topology. In DEthna, a novel parallel measurement model is proposed that generates marked transactions to infer link connections based on the transaction replacement and propagation mechanism in Ethereum. Moreover, a workload offloading scheme is designed so that DEthna can be deployed on multiple probing nodes to measure a large-scale Ethereum network at a low cost. We run DEthna on Goerli (the most popular Ethereum test network) to evaluate its capability in discovering network topology. The experimental results demonstrate that DEthna significantly outperforms state-of-the-art baselines. Based on DEthna, we further analyze characteristics of the Ethereum blockchain network, revealing that more than 50% of Ethereum nodes have low degree, which weakens network robustness.
Speaker
Speaker biography is not available.

Session Chair

Wenhai Sun (Purdue University, USA)

Session Break-3-1

Coffee Break

Conference
10:00 AM — 10:30 AM PDT
Local
May 23 Thu, 1:00 PM — 1:30 PM EDT
Location
Regency Foyer & Hallway

Session A-9

A-9: Crowdsourcing and crowdsensing

Conference
10:30 AM — 12:00 PM PDT
Local
May 23 Thu, 1:30 PM — 3:00 PM EDT
Location
Regency A

Seer: Proactive Revenue-Aware Scheduling for Live Streaming Services in Crowdsourced Cloud-Edge Platforms

Shaoyuan Huang, Zheng Wang, Zhongtian Zhang and Heng Zhang (Tianjin University, China); Xiaofei Wang (Tianjin Key Laboratory of Advanced Networking, Tianjin University, China); Wenyu Wang (Shanghai Zhuichu Networking Technologies Co., Ltd., China)

As live streaming services skyrocket, Crowdsourced Cloud-edge service Platforms (CCPs) have surfaced as pivotal intermediaries catering to the mounting demand. Despite the importance of stream scheduling to a CCP's Quality of Service (QoS) and revenue, conventional optimization strategies struggle to enhance CCP revenue, primarily due to the intricate relationship between server utilization and revenue. Additionally, the substantial scale of CCPs magnifies the difficulties of time-intensive scheduling. To tackle these challenges, we propose Seer, a proactive revenue-aware scheduling system for live streaming services in CCPs. The design of Seer is motivated by meticulous measurements of real-world CCP environments, which allow us to achieve accurate revenue modeling and overcome three key obstacles that hinder the integration of prediction and optimal scheduling. Utilizing an innovative Pre-schedule-Execute-Re-schedule paradigm and flexible scheduling modes, Seer achieves efficient revenue-optimized scheduling in CCPs. Extensive evaluations demonstrate Seer's superiority over competitors in terms of revenue, utilization, and anomaly penalty mitigation, boosting CCP revenue by 147% and expediting scheduling by a factor of 3.4.
Speaker
Speaker biography is not available.

QUEST: Quality-informed Multi-agent Dispatching System for Optimal Mobile Crowdsensing

Zuxin Li, Fanhang Man and Xuecheng Chen (Tsinghua University, China); Susu Xu (Stony Brook University, USA); Fan Dang (Tsinghua University, China); Xiao-Ping (Steven) Zhang (Tsinghua Shenzhen Internation Graduate School, China); Xinlei Chen (Tsinghua University, China)

In this work, we address the challenges in achieving optimal Quality of Information (QoI) for non-dedicated vehicular Mobile Crowdsensing (MCS) systems, which utilize vehicles not originally designed for sensing purposes to provide real-time data while moving around the city. These challenges include the coupling of sensing coverage and sensing reliability, as well as uncertain and time-varying vehicle status. To tackle these issues, we propose QUEST, a QUality-informed multi-agEnt diSpaTching system that ensures high sensing coverage and sensing reliability in non-dedicated vehicular MCS. QUEST optimizes QoI by introducing a novel metric called ASQ (aggregated sensing quality), which considers sensing coverage and sensing reliability jointly. Additionally, we design a mutual-aided truth discovery dispatching method to estimate sensing reliability and improve ASQ under uncertain vehicle statuses. Real-world data from our MCS system deployed in a metropolis is used for evaluation, demonstrating that QUEST achieves up to 26% higher ASQ improvement, reducing map reconstruction errors by 32-65% for different reconstruction algorithms.
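The truth-discovery ingredient can be illustrated with a classic CRH-style iteration (not QUEST's mutual-aided variant): alternately estimate per-location truths as reliability-weighted means, then update each vehicle's weight from its squared distance to those truths. The vehicle readings below are synthetic.

```python
import math

def truth_discovery(observations, iters=10):
    """CRH-style truth discovery over vehicles' readings.
    observations[vehicle][location] = reading. Generic sketch."""
    vehicles = list(observations)
    weights = {v: 1.0 for v in vehicles}
    truths = {}
    for _ in range(iters):
        # 1) truths as reliability-weighted means per location
        acc = {}
        for v in vehicles:
            for loc, x in observations[v].items():
                num, den = acc.get(loc, (0.0, 0.0))
                acc[loc] = (num + weights[v] * x, den + weights[v])
        truths = {loc: num / den for loc, (num, den) in acc.items()}
        # 2) weights: vehicles far from the truths get low reliability
        errors = {v: sum((x - truths[loc]) ** 2
                         for loc, x in observations[v].items())
                  for v in vehicles}
        total = sum(errors.values())
        weights = {v: math.log((total + 1e-12) / (e + 1e-12))
                   for v, e in errors.items()}
    return truths, weights

obs = {'v1': {'A': 20.1, 'B': 25.0},   # two consistent sensors...
       'v2': {'A': 19.9, 'B': 25.2},
       'v3': {'A': 35.0, 'B': 10.0}}   # ...and one faulty outlier
truths, weights = truth_discovery(obs)
```

After a few iterations the faulty vehicle's weight collapses and the truths settle near the consistent readings, which is the reliability signal a quality-aware dispatcher can then feed back into task assignment.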
Speaker
Speaker biography is not available.

Combinatorial Incentive Mechanism for Bundling Spatial Crowdsourcing with Unknown Utilities

Hengzhi Wang, Laizhong Cui and Lei Zhang (Shenzhen University, China); Linfeng Shen and Long Chen (Simon Fraser University, Canada)

Incentive mechanisms in Spatial Crowdsourcing (SC) have been widely studied as they provide an effective way to motivate mobile workers to perform spatial tasks. Yet, most existing mechanisms only involve single tasks, neglecting the presence of complementarity and substitutability among tasks. This limits their effectiveness in practical cases. Motivated by this, we consider task bundles for incentive mechanism design and closely analyze the mutual exclusion effect that arises with task bundles. We then develop a combinatorial incentive mechanism comprising three key policies: In the offline case, we propose a combinatorial assignment policy to address the conflict between mutual exclusion and assignment efficiency. We next study the conflict between mutual exclusion and truthfulness, and build a combinatorial pricing policy for paying winners that yields both incentive compatibility and individual rationality. In the online case with unknown workers' utilities, we present an online combinatorial assignment policy that balances the exploration-exploitation trade-off under the mutual exclusion constraints. Through theoretical analysis and numerical simulations using real-world mobile networking datasets, we demonstrate the effectiveness of the proposed mechanism.
Speaker
Speaker biography is not available.

Few-Shot Data Completion for New Tasks in Sparse CrowdSensing

En Wang, Mijia Zhang and Bo Yang (Jilin University, China); Yang Xu (Hunan University, China); Zixuan Song and Yongjian Yang (Jilin University, China)

Mobile Crowdsensing is a technology that utilizes mobile devices and volunteers to gather data about specific topics at large scale in real time. However, in practice, limited participation leads to missing data, i.e., the collected data may be sparse, which makes it difficult to perform accurate analysis. A possible technique called sparse crowdsensing addresses the sparse case with data completion, where unsensed data can be estimated through inference. However, sparse crowdsensing typically suffers from poor performance during the data completion stage due to various challenges: the sparsity of the sensed data, reliance on numerous timeslots, and uncertain spatiotemporal connections. To resolve such few-shot issues, the proposed solution uses the Correlated Data Fusion for Matrix Completion (CDFMC) approach, which leverages a small amount of objective data to retrain an auxiliary-dataset-based pre-trained model that can estimate unsensed data efficiently. CDFMC is trained using a combination of traditional Deep Matrix Factorization and Kalman Filtering, which not only enables the efficient representation and comparison of data samples but also fuses the objective data and auxiliary data effectively. Evaluation results show that the proposed CDFMC outperforms baseline techniques, achieving high accuracy in completing unsensed data with minimal training data.
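A minimal stand-in for the completion stage, assuming the sensed field is approximately low-rank: fit a rank-1 factorization by gradient descent on observed cells only, then read off the unsensed ones. This is a deliberate simplification of deep matrix factorization; all data below are synthetic.

```python
import numpy as np

def complete_matrix(M, mask, rank=1, lr=0.01, epochs=5000, seed=0):
    """Estimate unsensed cells of a sparse sensing matrix by fitting a
    low-rank factorization M ~ U @ V.T on observed cells (mask == 1).
    Minimal sketch, not CDFMC's deep matrix factorization."""
    rng = np.random.default_rng(seed)
    n, m = M.shape
    U = 0.1 * rng.standard_normal((n, rank))
    V = 0.1 * rng.standard_normal((m, rank))
    for _ in range(epochs):
        R = (U @ V.T - M) * mask           # residual on observed cells only
        U -= lr * (R @ V)
        V -= lr * (R.T @ U)
    return U @ V.T

# Synthetic rank-1 field; every third anti-diagonal of cells is unsensed
rng = np.random.default_rng(1)
truth = np.outer(rng.uniform(1, 2, 8), rng.uniform(1, 2, 6))
idx = np.add.outer(np.arange(8), np.arange(6))
mask = (idx % 3 != 0).astype(float)        # ~1/3 of cells unsensed
est = complete_matrix(truth * mask, mask)
unseen_err = np.abs(est - truth)[mask == 0].mean()
```

Because every row and column keeps some observations, the rank-1 fit recovers the hidden cells almost exactly; the few-shot difficulty the abstract targets arises precisely when observations are too sparse for such a model to be trained from scratch.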
Speaker Mijia Zhang (Jilin University)

Mijia Zhang received his Ph.D. degree in computer system architecture from Jilin University, Changchun, China, in 2023. His current work focuses on sparse Mobile CrowdSensing, Neural Networks, spatiotemporal data inference and matrix completion.


Session Chair

Srinivas Shakkottai (Texas A&M University, USA)

Session B-9

B-9: Localization and Tracking

Conference
10:30 AM — 12:00 PM PDT
Local
May 23 Thu, 1:30 PM — 3:00 PM EDT
Location
Regency B

ATP: Acoustic Tracking and Positioning under Multipath and Doppler Effect

Guanyu Cai and Jiliang Wang (Tsinghua University, China)

Acoustic tracking and positioning technologies using microphones and speakers have gained significant interest for applications like virtual reality, augmented reality, and IoT devices. However, existing methods still face challenges in real-world deployment due to multipath interference, Doppler frequency shift, and sampling frequency offset between devices. We propose a versatile Acoustic Tracking and Positioning (ATP) method to address these challenges. First, we propose an iterative sampling frequency offset calibration method. Next, we propose a Doppler frequency shift estimation and compensation model. Finally, we propose a fast adaptive algorithm to reconstruct the line-of-sight (LOS) signal under multipath. We implement ATP on Android and PC and compare it with eight different methods. Evaluation results show that ATP achieves mean accuracies of 0.66 cm, 0.56 cm, and 1.0 cm in tracking, ranging, and positioning tasks, respectively, which is 2×, 6×, and 5.8× better than the state-of-the-art methods. ATP advances acoustic sensing for practical applications by providing a robust solution for real-world environments.
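One of the sub-problems, estimating the Doppler shift of a known tone, can be sketched with a textbook FFT-peak estimator; ATP's actual estimation-and-compensation model is more elaborate, and the sample rate, tone frequency, and velocity below are assumptions.

```python
import numpy as np

def estimate_doppler(rx, f_tx, fs):
    """Estimate the Doppler shift of a received tone by locating the FFT
    magnitude peak and subtracting the transmitted frequency. Textbook
    sketch; resolution is limited to one FFT bin (fs / len(rx))."""
    n = len(rx)
    spectrum = np.abs(np.fft.rfft(rx * np.hanning(n)))  # window vs. leakage
    f_peak = np.fft.rfftfreq(n, d=1 / fs)[np.argmax(spectrum)]
    return f_peak - f_tx

# Assumed setup: 18 kHz near-inaudible tone, 48 kHz audio sampling,
# device moving at 0.5 m/s (speed of sound ~343 m/s)
fs, f_tx, v = 48_000, 18_000, 0.5
f_doppler = f_tx * v / 343.0               # ~26 Hz expected shift
t = np.arange(8192) / fs
rx = np.cos(2 * np.pi * (f_tx + f_doppler) * t)
shift = estimate_doppler(rx, f_tx, fs)
```

With an 8192-sample window the bin width is about 5.9 Hz, so the estimate lands within one bin of the true shift; sub-bin methods (or a model like ATP's) are needed for centimeter-level tracking.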
Speaker Guanyu Cai (Tsinghua University)



EventBoost: Event-based Acceleration Platform for Real-time Drone Localization and Tracking

Hao Cao, Jingao Xu, Danyang Li and Zheng Yang (Tsinghua University, China); Yunhao Liu (Tsinghua University & The Hong Kong University of Science and Technology, China)

Drones have demonstrated their pivotal role in various applications such as search-and-rescue, smart logistics, and industrial inspection, with accurate localization playing an indispensable part. However, in high-dynamic-range and rapid-motion scenarios, traditional visual sensors often face challenges in pose estimation. Event cameras, with their high temporal resolution, present a fresh opportunity for perception in such challenging environments. Current efforts resort to event-visual fusion to enhance the drone's sensing capability. Yet, the lack of efficient event-visual fusion algorithms and corresponding acceleration hardware causes the potential of event cameras to remain underutilized. In this paper, we introduce EventBoost, an acceleration platform designed for drone-based applications with event-image fusion. We propose a suite of novel algorithms through software-hardware co-design on a Zynq SoC, aimed at enhancing real-time localization precision and speed. EventBoost achieves enhanced visual fusion precision and markedly elevated processing efficiency: a performance comparison with two state-of-the-art systems shows that EventBoost achieves a 24.33% improvement in accuracy with a 30 ms latency on resource-constrained platforms. We further substantiate EventBoost's exemplary performance through real-world application cases.
Speaker Hao Cao (Tsinghua University)

Hao Cao is a Ph.D. candidate in the School of Software at Tsinghua University, Beijing, China. He received his B.E. degree from the College of Intelligence and Computing, Tianjin University, in 2019. His research interests lie in the Internet of Things and Mobile Computing.


BLE Location Tracking Attacks by Exploiting Frequency Synthesizer Imperfection

Yeming Li, Hailong Lin, Jiamei Lv, Yi Gao and Wei Dong (Zhejiang University, China)

In recent years, Bluetooth Low Energy (BLE) has become one of the most widely used wireless protocols, and it is common for users to carry one or more BLE devices. With the extensive deployment of BLE devices, there is a significant privacy risk if these devices can be tracked. However, the common wisdom suggests that the risk of BLE location tracking is negligible, because researchers believe there are no BLE fingerprints that remain stable across different scenarios (e.g., temperatures) for different BLE devices of the same model. In this paper, we introduce a novel physical-layer fingerprint named Transient Dynamic Fingerprint (TDF), which originates from the negative feedback control process of the frequency synthesizer. Because of hardware imperfection, the dynamic features of the frequency synthesizer differ, making TDF unique across devices, even those of the same model. Furthermore, TDF remains stable under different thermal conditions. Based on TDF, we propose BTrack, a practical BLE device tracking system, and evaluate its tracking performance in different environments. The results show that BTrack works well once BLE beacons are effectively received. Its identification accuracy is 35.38%-57.41% higher than that of the existing method, and stable over temperatures, distances, and locations.
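The matching side of such fingerprinting can be sketched as nearest-template classification over frequency-settling transients. The exponential settling model and the per-device time constants below are illustrative assumptions, not TDF itself.

```python
import numpy as np

def identify(transient, templates):
    """Match an observed frequency-settling transient against per-device
    templates by minimum mean-squared distance (generic fingerprint
    matching sketch; BTrack derives its TDF from the synthesizer's
    feedback-loop dynamics)."""
    dists = {dev: float(np.mean((transient - tpl) ** 2))
             for dev, tpl in templates.items()}
    return min(dists, key=dists.get)

t = np.linspace(0, 50e-6, 200)                 # 50 µs settling window

def settle(tau, f_err=50e3):
    """Assumed PLL-style exponential settling of the frequency error."""
    return f_err * np.exp(-t / tau)

# Hypothetical same-model devices differing only in loop time constant
templates = {'dev_a': settle(6e-6), 'dev_b': settle(8e-6),
             'dev_c': settle(11e-6)}
rng = np.random.default_rng(3)
observed = settle(8e-6) + rng.normal(0, 1e3, t.size)   # noisy capture
who = identify(observed, templates)
```

Even with measurement noise the transient's shape separates the devices, which is the intuition behind a fingerprint rooted in loop dynamics rather than static spectral features.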
Speaker Yeming Li (Zhejiang University)

Yeming Li is currently a Ph.D. candidate in the College of Computer Science, Zhejiang University, Hangzhou, China. He received his bachelor's degree from the Zhejiang University of Technology, Hangzhou, China. His research interests include the IoT, wireless communication, and BLE.


ORAN-Sense: Localizing Non-cooperative Transmitters with Spectrum Sensing and 5G O-RAN

Yago Lizarribar (IMDEA Networks, Spain); Roberto Calvo-Palomino (Universidad Rey Juan Carlos, Spain); Alessio Scalingi (IMDEA Networks, Spain); Giuseppe Santaromita (IMDEA Networks Institute, Spain); Gérôme Bovet (Armasuisse, Switzerland); Domenico Giustiniano (IMDEA Networks Institute, Spain)

Prior initiatives to build crowdsensing networks for the sole purpose of performing spectrum measurements have failed, primarily due to maintenance costs. In this paper, we take a different view and propose ORAN-Sense, a novel architecture of IoT spectrum crowdsensing devices integrated into the next generation of cellular networks. We use this framework to extend the capabilities of 5G networks and localize a transmitter that does not collaborate in the positioning process. While 5G positioning signals cannot be applied in this scenario, as the transmitter does not participate in localization through dedicated pilot symbols and data, we show how to perform Time Difference of Arrival-based positioning using low-cost spectrum sensors: we minimize the hardware impairments of low-cost spectrum receivers, introduce methods to address errors caused by over-the-air signal propagation, and propose a low-cost synchronization technique. We have deployed our localization network in two major European cities. Our experimental results indicate that localizing non-collaborative transmitters is feasible even with low-cost radio receivers, with median accuracies of tens of meters using just a few sensors spanning a city, which makes the approach suitable for integration into the next generation of cellular networks.
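Time Difference of Arrival positioning, as used here, estimates a transmitter's location from differences in signal arrival times at synchronized sensors. The principle can be sketched with a brute-force grid search over candidate positions (an illustrative sketch only, not the paper's pipeline; the sensor layout and function names are hypothetical):

```python
import itertools
import math

C = 3e8  # assumed propagation speed (m/s)

def tdoa_localize(sensors, tdoas, grid=200, span=1000.0):
    """Brute-force TDoA localization: pick the grid point whose predicted
    arrival-time differences (relative to sensor 0) best match the measured ones."""
    best, best_err = None, float("inf")
    for i, j in itertools.product(range(grid + 1), repeat=2):
        x = -span + 2 * span * i / grid
        y = -span + 2 * span * j / grid
        dists = [math.hypot(x - sx, y - sy) for sx, sy in sensors]
        predicted = [(d - dists[0]) / C for d in dists[1:]]
        err = sum((p - m) ** 2 for p, m in zip(predicted, tdoas))
        if err < best_err:
            best, best_err = (x, y), err
    return best
```

Real deployments replace the grid search with least-squares solvers and must additionally correct for sensor clock offsets and multipath, which is where the paper's synchronization and error-mitigation methods come in.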
Speaker
Speaker biography is not available.

Session Chair

Jin Nakazato (The University of Tokyo, Japan)

Session C-9

C-9: Internet of Things (IoT) Networks

Conference
10:30 AM — 12:00 PM PDT
Local
May 23 Thu, 1:30 PM — 3:00 PM EDT
Location
Regency C

DTMM: Deploying TinyML Models on Extremely Weak IoT Devices with Pruning

Lixiang Han, Zhen Xiao and Zhenjiang Li (City University of Hong Kong, Hong Kong)

DTMM is a library designed for efficient deployment and execution of machine learning models on weak IoT devices such as microcontroller units (MCUs). The motivation for designing DTMM comes from the emerging field of tiny machine learning (TinyML), which explores extending the reach of machine learning to many low-end IoT devices to achieve ubiquitous intelligence. Due to the weak capability of embedded devices, it is necessary to compress models by pruning enough weights before deployment. Although pruning has been studied extensively on many computing platforms, two key issues with pruning methods are exacerbated on MCUs: models need to be deeply compressed without significantly compromising accuracy, and they should perform efficiently after pruning. Current solutions achieve only one of these objectives, not both. In this paper, we find that pruned models have great potential for efficient deployment and execution on MCUs. Therefore, we propose DTMM with pruning unit selection, pre-execution pruning optimizations, runtime acceleration, and post-execution low-cost storage to fill the gap for efficient deployment and execution of pruned models. It can be integrated into commercial ML frameworks for practical deployment, and a prototype system has been developed. Extensive experiments on various models show promising gains compared to state-of-the-art methods.
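DTMM's pruning-unit selection is more elaborate than plain weight pruning, but the baseline idea it builds on can be sketched as simple magnitude pruning (an illustrative sketch; the function and threshold rule are assumptions, not taken from the paper):

```python
def magnitude_prune(weights, sparsity):
    """Zero out (roughly) the smallest-magnitude `sparsity` fraction of weights.
    Ties at the threshold may prune slightly more than requested."""
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * sparsity)
    if k == 0:
        return [row[:] for row in weights]  # nothing to prune
    thresh = flat[k - 1]
    return [[0.0 if abs(w) <= thresh else w for w in row] for row in weights]
```

After pruning, MCU deployments additionally need a compact sparse storage format and kernels that skip zeros efficiently, which is precisely the deployment/execution gap DTMM targets.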
Speaker
Speaker biography is not available.

Memory-Efficient and Secure DNN Inference on TrustZone-enabled Consumer IoT Devices

Xueshuo Xie (Haihe Lab of ITAI, China); Haoxu Wang, Zhaolong Jian and Li Tao (Nankai University, China); Wei Wang (Beijing Jiaotong University, China); Zhiwei Xu (Haihe Lab of ITAI); Guiling Wang (New Jersey Institute of Technology, USA)

Edge intelligence enables resource-demanding DNN inference without transferring original data, addressing concerns about data privacy on consumer IoT devices. For privacy-sensitive applications, deploying models in hardware-isolated trusted execution environments (TEEs) becomes essential. However, the limited secure memory in TEEs poses challenges for deploying DNN inference, and alternative techniques like model partitioning and offloading introduce performance degradation and security issues. In this paper, we present a novel approach for advanced model deployment in TrustZone that ensures comprehensive privacy preservation during model inference. We design memory-efficient management methods to support memory-demanding inference in TEEs. By adjusting the memory priority, we effectively mitigate memory leakage risks and memory overlap conflicts, requiring only 32 lines of code changes in the trusted operating system. Additionally, we leverage two tiny libraries: S-Tinylib (2,538 LoCs), a tiny deep learning library, and Tinylibm (827 LoCs), a tiny math library, to support efficient inference in TEEs. We implemented a prototype on a Raspberry Pi 3B+ and evaluated it using three well-known lightweight DNN models. The experimental results demonstrate that our design improves inference speed by 3.13 times and reduces power consumption by over 66.5% compared to non-memory-optimized methods in TEEs.
Speaker Haoxu Wang (Nankai University)

Haoxu Wang received his B.S. degree in computer science and technology from Shandong University in 2021. He is currently working toward his M.A.Sc. degree in the College of Computer Science, Nankai University. His main research interests include trusted execution environments, the Internet of Things, machine learning, and edge computing.


VisFlow: Adaptive Content-Aware Video Analytics on Collaborative Cameras

Yuting Yan, Sheng Zhang, Xiaokun Wang, Ning Chen and Yu Chen (Nanjing University, China); Yu Liang (Nanjing Normal University, China); Mingjun Xiao (University of Science and Technology of China, China); Sanglu Lu (Nanjing University, China)

There is an increasing demand for querying live surveillance video streams from large-scale camera networks, for applications such as public safety and smart cities. To resolve the conflict between compute-intensive detection models and the limited resources on cameras, a detection-with-tracking framework has gained prominence. Nevertheless, because trackers are susceptible to occlusions and newly appearing objects, frequent detections are required to calibrate the results, leading to detection demands that vary with video content. Consequently, we propose VisFlow, a mechanism for content-aware analytics on collaborative cameras, to increase the quality of detections and meet the latency requirement by fully utilizing camera resources. We formulate this problem as a non-linear integer program over a long-term horizon to maximize detection accuracy. An online mechanism, underpinned by a queue-based algorithm and randomized rounding, is then devised to dynamically orchestrate detection workloads among cameras, thus adapting to fluctuating detection demands. We rigorously prove that both the dynamic regret with respect to overall accuracy and the transmission budget are bounded in the long run. Testbed experiments on Jetson kits demonstrate that VisFlow improves accuracy by 18.3% over the alternatives.
Speaker
Speaker biography is not available.

SAMBA: Detecting SSL/TLS API Misuses in IoT Binary Applications

Kaizheng Liu, Ming Yang and Zhen Ling (Southeast University, China); Yuan Zhang (Fudan University, China); Chongqing Lei (Southeast University, China); Lan Luo (Anhui University of Technology, China); Xinwen Fu (University of Massachusetts Lowell, USA)

IoT devices are increasingly adopting the Secure Socket Layer (SSL) and Transport Layer Security (TLS) protocols. However, the misuse of SSL/TLS libraries still threatens communication security. Existing tools for detecting SSL/TLS API misuses primarily rely on source code analysis, while IoT applications are usually released as binaries without source code. This paper presents SAMBA, a novel tool to automatically detect SSL/TLS API misuses in IoT binaries through static analysis. To overcome the path explosion problem and deal with various SSL/TLS implementations, we introduce a three-level reduction method to construct the SSL/TLS API-centric graph (SAG), which is much smaller than the conventional interprocedural control flow graph. We propose a formal expression of API misuse signatures, capable of capturing different types of misuse, particularly those in the SSL/TLS connection establishment process. We successfully analyze 115 IoT binaries and find that 94 of them have the vulnerability of insecure certificate verification and 112 support deprecated SSL/TLS protocols. SAMBA is the first IoT binary analysis system for detecting SSL/TLS API misuses.
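The "insecure certificate verification" misuse that such detectors report corresponds to configurations like the following, shown here with Python's standard ssl module rather than a C SSL/TLS API (an illustrative sketch of the misuse class, not code from the paper):

```python
import ssl

# Misuse: certificate verification disabled. A man-in-the-middle can then
# present any certificate and the connection is still accepted.
insecure = ssl.create_default_context()
insecure.check_hostname = False        # must be disabled before CERT_NONE
insecure.verify_mode = ssl.CERT_NONE   # chain validation skipped

# Correct usage: the default context verifies the peer chain and hostname.
secure = ssl.create_default_context()
assert secure.verify_mode == ssl.CERT_REQUIRED
assert secure.check_hostname
```

Note the ordering constraint: Python refuses to set `verify_mode = CERT_NONE` while `check_hostname` is still enabled, which is itself a small API-design guard against exactly this misuse.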
Speaker Kaizheng Liu (Southeast University)



Session Chair

Zhangyu Guan (University at Buffalo, USA)

Session D-9

D-9: RFID and Wireless Charging

Conference
10:30 AM — 12:00 PM PDT
Local
May 23 Thu, 1:30 PM — 3:00 PM EDT
Location
Regency D

RF-Boundary: RFID-Based Virtual Boundary

Xiaoyu Li and Jia Liu (Nanjing University, China); Xuan Liu (Hunan University, China); Yanyan Wang (Hohai University, China); Shigeng Zhang (Central South University, China); Baoliu Ye and Lijun Chen (Nanjing University, China)

A boundary is a physical or virtual line that marks the edge or limit of a specific region; boundaries are widely used in many applications, such as autonomous driving, virtual walls, and robotic lawn mowers. However, none of the existing work balances the cost, deployability, and scalability of a boundary well. In this paper, we propose a new RFID-based boundary scheme, together with its detection algorithm, called RF-Boundary, which has the competitive advantages of being battery-free, low-cost, and easy to maintain. We develop two techniques, phase gradient and dual-antenna DOA, to address the key challenges posed by RF-Boundary: the lack of calibration information and multi-edge interference. We implement a prototype of RF-Boundary with commercial RFID systems and a mobile robot. Extensive experiments verify the feasibility and good performance of RF-Boundary.
Speaker Xiaoyu Li (Nanjing University)

Xiaoyu Li is currently a Ph.D. student at Nanjing University. He received his B.E. degree from Huazhong University of Science and Technology (HUST). His research interests include wireless sensing and communication.


Safety Guaranteed Power-Delivered-to-Load Maximization for Magnetic Wireless Power Transfer

Wangqiu Zhou, Xinyu Wang, Hao Zhou and ShenYao Jiang (University of Science and Technology of China, China); Zhi Liu (The University of Electro-Communications, Japan); Yusheng Ji (National Institute of Informatics, Japan)

Electromagnetic radiation (EMR) safety has long been a critical obstacle to the development of magnetic wireless power transfer technology: users care about the energy actually received by charging devices while also being concerned about their health. We study this significant problem in this paper and propose a universal, safety-guaranteed power-delivered-to-load (PDL) maximization scheme called SafeGuard. Technically, we first utilize an off-the-shelf electromagnetic simulator to perform EMR distribution analysis, ensuring the universality of the method. Then, we introduce the concept of multiple importance sampling to achieve efficient EMR safety constraint extraction. Finally, we treat the proposed optimization problem as an optimal boundary point search problem from the perspective of space geometry, and devise a new grid-based multi-constraint parallel processing algorithm to solve it efficiently. We implement a system prototype of SafeGuard and conduct extensive experiments to evaluate it. The results indicate that SafeGuard improves the achieved PDL by up to 1.75× compared with the state-of-the-art baseline while guaranteeing EMR safety. Furthermore, SafeGuard accelerates the solution process by 29.12× compared with the traditional numerical method, satisfying the fast optimization requirement of wireless charging systems.
Speaker Junaid Ahmed Khan
Junaid Ahmed Khan is a PACCAR endowed Assistant Professor in Electrical and Computer Engineering at Western Washington University. Previously, he has been a research associate at the Center for Urban Science and Progress (CUSP) and Connected Cities with Smart Transportation (C2SMART) center at New York University from September 2019 to September 2020. He was also a research fellow at the DRONES and Smart City research clusters at the FedEx Institute of Technology, University of Memphis from January 2018 to August 2019. He has worked as a senior researcher at Inria Agora team, CITI lab at the National Institute of Applied Sciences (INSA), Lyon, France from October 2016 to January 2018. He has a Ph.D in Computer Science from Université Paris-Est, Marne-la-Vallée, France in November 2016. His research interests are Cyber Physical Systems with emphasis on Connected Autonomous Vehicles (CAVs) and Internet of Things (IoTs).

Dynamic Power Distribution Controlling for Directional Chargers

Yuzhuo Ma, Dié Wu and Jing Gao (Sichuan Normal University, China); Wen Sun (Northwestern Polytechnical University, China); Jilin Yang and Tang Liu (Sichuan Normal University, China)

Recently, deploying static chargers to construct timely and robust Wireless Rechargeable Sensor Networks (WRSNs) has become an important research issue for addressing the limited energy of wireless sensor networks. However, a fixed power distribution, once established, lacks flexibility in responding to dynamic charging requests from sensors and may leave some sensors continuously impacted by destructive wave interference. This results in a gap between energy supply and practical demand, making the charging process less efficient. In this paper, we focus on real-time sensor charging requests and formulate a dynamic power disTributIon controlling for Directional chargErs (TIDE) problem to maximize the overall charging utility. To solve the problem, we first build a charging model for directional chargers that accounts for wave interference, and extract the candidate charging orientations from the continuous search space. Then we propose a neighbor set division method to narrow the scope of calculation. Finally, we design a dynamic power distribution controlling algorithm to update the neighbor sets in a timely manner and select optimal orientations for chargers. Our experimental results demonstrate the effectiveness and efficiency of the proposed scheme; it outperforms the comparison algorithms by 142.62% on average.
Speaker Yuzhuo Ma (Sichuan Normal University)

Yuzhuo Ma received the B.S. degree in mechanical engineering from Soochow University, Suzhou, China, in 2019. She is working toward the M.S. degree in the College of Computer Science, Sichuan Normal University. Her research interests focus on wireless rechargeable sensor networks.


LoMu: Enable Long-Range Multi-Target Backscatter Sensing for Low-Cost Tags

Yihao Liu, Jinyan Jiang and Jiliang Wang (Tsinghua University, China)

Backscatter sensing has shown great potential in the Internet of Things (IoT) and has attracted substantial research interest. We present LoMu, the first long-range multi-target backscatter sensing system for low-cost tags under ambient LoRa. LoMu analyzes the received low-SNR backscatter signals from different tags and calculates their phases to derive motion information. The design of LoMu faces practical challenges, including near-far interference between multiple tags, phase offsets induced by unsynchronized transceivers, and phase errors due to frequency drift in low-cost tags. We propose a conjugate-based energy concentration method to extract high-quality signals and a Hamming-window-based method to alleviate the near-far problem. We then leverage the relationship between the excitation signal and backscatter signals to synchronize TX and RX. Finally, we combine the double sidebands of backscatter signals to cancel the tag frequency drift. We implement LoMu and conduct extensive experiments to evaluate its performance. The results demonstrate that LoMu can accurately sense 35 tags simultaneously. The average frequency sensing error is 0.7% at 400 m, which is 4× the operating distance of the state-of-the-art.
Speaker Yihao Liu (Tsinghua University)

Yihao Liu received the B.E. degree from the School of Software, Tsinghua University, China, in 2023. He is currently working toward the Master's degree in the School of Software at Tsinghua University. His research interests include low-power wide-area networks, wireless sensing, and the Internet of Things.


Session Chair

Filip Maksimovic (Inria, France)

Session E-9

E-9: Machine Learning 3

Conference
10:30 AM — 12:00 PM PDT
Local
May 23 Thu, 1:30 PM — 3:00 PM EDT
Location
Regency E

Parm: Efficient Training of Large Sparsely-Activated Models with Dedicated Schedules

Xinglin Pan (Hong Kong Baptist University, Hong Kong); Wenxiang Lin and Shaohuai Shi (Harbin Institute of Technology, Shenzhen, China); Xiaowen Chu (The Hong Kong University of Science and Technology (Guangzhou) & The Hong Kong University of Science and Technology, Hong Kong); Weinong Sun (The Hong Kong University of Science and Technology, Hong Kong); Bo Li (Hong Kong University of Science and Technology, Hong Kong)

Sparsely-activated Mixture-of-Expert (MoE) layers have found practical applications in enlarging the model size of large-scale foundation models, with only a sub-linear increase in computation demands. Despite the wide adoption of hybrid parallel paradigms like model parallelism, expert parallelism, and expert-sharding parallelism (i.e., MP+EP+ESP) to support MoE model training on GPU clusters, training efficiency is hindered by the communication costs these parallel paradigms introduce. To address this limitation, we propose Parm, a system that accelerates MP+EP+ESP training by designing two dedicated schedules for placing communication tasks. The proposed schedules eliminate redundant computations and communications and enable overlaps between intra-node and inter-node communications, ultimately reducing the overall training time. As the two schedules are not mutually exclusive, we provide comprehensive theoretical analyses and derive an automatic and accurate solution to determine which schedule should be applied in different scenarios. Experimental results on an 8-GPU server and a 32-GPU cluster demonstrate that Parm outperforms the state-of-the-art MoE training system, DeepSpeed-MoE, achieving a 1.13×-5.77× speedup on 1296 manually configured MoE layers and approximately 3× improvement on two real-world MoE models based on BERT and GPT-2.
Speaker
Speaker biography is not available.

Predicting Multi-Scale Information Diffusion via Minimal Substitution Neural Networks

Ranran Wang (University of Electronic Science and Technology of China, China); Yin Zhang (University of Electronic Science and Technology, China); Wenchao Wan and Xiong Li (University of Electronic Science and Technology of China, China); Min Chen (Huazhong University of Science and Technology, China)

Information diffusion prediction is a complex task due to the numerous variables present on large social platforms like Weibo and Twitter. While many researchers have focused on the internal influence of individual cascades, they often overlook other influential factors that affect information diffusion, including competition and cooperation among information, the attractiveness of information to users, and the potential impact of content anticipation on further diffusion. Traditional methods relying on individual information modeling struggle to consider these aspects comprehensively. To address these issues, we propose MIDPMS, a multi-scale information diffusion prediction method built on a minimal substitution neural network. Specifically, to simultaneously enable macro-scale popularity prediction and micro-scale diffusion prediction, we model information diffusion as a substitution process among different information sources. Furthermore, considering the life cycle of content, user preferences, and potential content anticipation, we introduce minimal substitution theory and design a minimal substitution neural network to model this substitution system and facilitate joint training of macroscopic and microscopic diffusion prediction. Extensive experiments on Weibo and Twitter datasets demonstrate that MIDPMS significantly outperforms state-of-the-art methods on both datasets for both multi-scale tasks.
Speaker Ranran Wang (University of Electronic Science and Technology of China, China)

Ranran Wang is currently a Ph.D. candidate in the School of Information and Communication Engineering, University of Electronic Science and Technology of China. Her main research interests include edge intelligence, cognitive wireless communications, and graph learning.


Online Resource Allocation for Edge Intelligence with Colocated Model Retraining and Inference

Huaiguang Cai (Sun Yat-Sen University, China); Zhi Zhou (Sun Yat-sen University, China); Qianyi Huang (Sun Yat-Sen University, China & Peng Cheng Laboratory, China)

Due to several kinds of drift, the traditional computing paradigm of deploying a trained model and then performing inference can no longer meet accuracy requirements. Accordingly, a new computing paradigm has emerged in which, after the model is deployed, it is retrained and performs inference simultaneously on new data (we call this model inference and retraining co-location). The key challenge is how to allocate computing resources between model retraining and inference to improve long-term accuracy, especially when computing resources change dynamically.
We address this challenge by first modeling the relationship between model performance and different retraining and inference configurations, and then proposing an online algorithm with linear complexity.
The algorithm approximately solves the original non-convex, integer, time-coupled problem by adjusting the proportion between model retraining and inference according to the computing resources available in real time. Its competitive ratio is strictly better than the tight competitive ratio of the Inference-Only algorithm (corresponding to the traditional computing paradigm) when data drift occurs for a sufficiently long time, demonstrating the advantages and applications of the model inference and retraining co-location paradigm. In particular, the algorithm translates into several heuristic algorithms in different environments. Experiments based on real scenarios confirm its effectiveness.
Speaker
Speaker biography is not available.

Tomtit: Hierarchical Federated Fine-Tuning of Giant Models based on Autonomous Synchronization

Tianyu Qi and Yufeng Zhan (Beijing Institute of Technology, China); Peng Li (The University of Aizu, Japan); Yuanqing Xia (Beijing Institute of Technology, China)

With the quick evolution of giant models, the paradigm of pre-training models and then fine-tuning them for downstream tasks has become increasingly popular. The adapter has been recognized as an efficient fine-tuning technique and has attracted much research attention. However, adapter-based fine-tuning still faces the challenge of insufficient data. Federated fine-tuning has recently been proposed to fill this gap, but existing solutions suffer from a serious scalability issue and are inflexible in handling dynamic edge environments. In this paper, we propose Tomtit, a hierarchical federated fine-tuning system that can significantly accelerate fine-tuning and improve the energy efficiency of devices. Via an extensive empirical study, we find that model synchronization schemes (i.e., when edges and devices should synchronize their models) play a critical role in federated fine-tuning. The core of Tomtit is a distributed design that allows each edge and device to have a unique synchronization scheme with respect to its heterogeneity in model structure, data distribution, and computing capability. Furthermore, we provide a theoretical guarantee on the convergence of Tomtit. Finally, we develop a prototype of Tomtit and evaluate it on a testbed. Experimental results show that it significantly outperforms the state-of-the-art.
Speaker Tianyu Qi (Beijing Institute of Technology, China)

Tianyu Qi received the B.S. degree from China University of Geosciences, Wuhan, China, in 2021. He is currently pursuing the M.S. degree in the School of Automation at the Beijing Institute of Technology, Beijing, China. His research interests include federated learning, cloud computing, and machine learning.


Session Chair

Marco Fiore (IMDEA Networks Institute, Spain)

Session F-9

F-9: Hashing, Clustering, and Optimization

Conference
10:30 AM — 12:00 PM PDT
Local
May 23 Thu, 1:30 PM — 3:00 PM EDT
Location
Regency F

IPFS in the Fast Lane: Accelerating Record Storage with Optimistic Provide

Dennis Trautwein (University of Göttingen, Germany & Protocol Labs Inc., USA); Yiluo Wei (Hong Kong University of Science & Technology (GZ), China); Ioannis Psaras (Protocol Labs & University College London, United Kingdom (Great Britain)); Moritz Schubotz (FIZ-Karlsruhe, Germany); Ignacio Castro (Queen Mary University of London, United Kingdom (Great Britain)); Bela Gipp (University of Göttingen, Germany); Gareth Tyson (The Hong Kong University of Science and Technology & Queen Mary University of London, Hong Kong)

The centralization of web services has raised concerns about critical single points of failure, such as content hosting, name resolution, and certification. To address these issues, the "Decentralized Web" movement advocates for decentralized alternatives. Distributed Hash Tables (DHTs) have emerged as a key component facilitating this movement, as they offer efficient key/value indexing. The InterPlanetary File System (IPFS) exemplifies this approach by leveraging DHTs for data indexing and distribution. A critical finding of previous studies is that PUT performance for record storage is unacceptably slow, sometimes taking minutes to complete and hindering the adoption of delay-intolerant applications. To address this challenge, this research paper presents three significant contributions. First, we present the design of Optimistic Provide, an approach to accelerate DHT PUT operations in Kademlia-based IPFS networks while maintaining full backward compatibility. Second, we implement and deploy the mechanism and observe its usage in the de facto IPFS implementation, Kubo. Third, we evaluate its effectiveness in the IPFS and Filecoin DHTs. We confirm that we enable sub-second record storage from North America and Europe for 90% of PUT operations while reducing networking overhead by over 40% and maintaining record availability.
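For context, a Kademlia PUT stores a record on the k peers whose IDs are closest to the record key under the XOR metric; the expensive part is the iterative network walk that discovers those peers, which Optimistic Provide accelerates by storing on good candidates before the walk fully terminates. The selection rule itself is simple (a minimal sketch with toy integer IDs, not Kubo's implementation):

```python
def k_closest(peer_ids, key, k=20):
    """Rank peers by XOR distance to the key; a Kademlia PUT targets
    the first k of this ranking."""
    return sorted(peer_ids, key=lambda pid: pid ^ key)[:k]
```

With real 256-bit IDs the full peer set is never known locally, so a node repeatedly queries the closest peers it has seen for even closer ones until the top-k set stabilizes; the latency of that convergence is what dominates the minutes-long PUTs reported above.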
Speaker Dennis Trautwein (University of Göttingen)

Dennis Trautwein is a PhD candidate at the University of Göttingen working with Prof. Dr. Bela Gipp. He is also a Research Engineer at IPShipyard, maintaining IPFS, libp2p, and the monitoring infrastructure for both projects. He completed his Bachelor's degree in extraterrestrial physics and his Master's degree in solid-state physics at the CAU in Kiel before diving into topics revolving around decentralization and peer-to-peer networks. In his spare time, he enjoys playing the guitar and spending time in nature around Lake Constance.


Fast Algorithms for Loop-Free Network Updates using Linear Programming and Local Search

Radu Vintan (EPFL, Switzerland); Harald Raecke (TU Munich, Germany); Stefan Schmid (TU Berlin, Germany)

To meet stringent performance requirements, communication networks are becoming increasingly programmable and flexible, supporting fast and frequent adjustments. However, reconfiguring networks in a dependable and transiently consistent manner is known to be algorithmically challenging. This paper revisits the fundamental problem of how to update the routes in a network in a (transiently) loop-free manner, considering both the Strong Loop-Freedom (SLF) and the Relaxed Loop-Freedom (RLF) property.

We present two fast algorithms that solve the SLF and RLF problem variants exactly, to optimality. Our algorithms are based on a parameterized integer linear program that would be intractable to solve directly with a classic solver. Our main technical contribution is a lazy cycle-breaking strategy which, by adding constraints lazily, improves performance dramatically and outperforms the state-of-the-art exact algorithms by an order of magnitude on realistic medium-sized networks. We further explore approximate algorithms and show that while a relaxation approach is relatively slow, a local search approach finds short update schedules, outperforming the state-of-the-art heuristics.

On the theoretical front, we also provide an approximation lower bound for the update time of the state-of-the-art algorithm in the literature. We made all our code and implementations publicly available.
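A transient loop arises when, mid-update, some routers already forward along the new routes while others still use the old ones. The feasibility check for a single intermediate state can be sketched as follows (an illustrative sketch under assumed next-hop maps; the paper's algorithms search over orderings of such states with an ILP and local search):

```python
def has_transient_loop(old_nh, new_nh, updated, dst):
    """Return True if forwarding toward dst can loop when the nodes in
    `updated` already use their new next hop and all others still use
    the old one."""
    mixed = {v: (new_nh[v] if v in updated else old_nh[v]) for v in old_nh}
    for start in mixed:
        seen, v = set(), start
        while v != dst:
            if v in seen:      # revisited a node: forwarding loop
                return True
            seen.add(v)
            v = mixed[v]
    return False
```

A loop-free update schedule is an ordering of node updates such that every intermediate `updated` set passes this check (SLF), or at least never traps already-flowing packets (RLF).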
Speaker
Speaker biography is not available.

The Reinforcement Cuckoo Filter

Meng Li and Wenqi Luo (Nanjing University, China); Haipeng Dai (Nanjing University, China & State Key Laboratory for Novel Software Technology, China); Huayi Chai (University of Nanjing, China); Rong Gu (Nanjing University, China); Xiaoyu Wang (Soochow University, China); Guihai Chen (Shanghai Jiao Tong University, China)

In this paper, we consider the approximate membership testing problem on skewed data traces, in which some hot or popular items repeat frequently. Previous solutions suffer from either high false positive rates or low lookup throughput. To address this problem, we propose the Reinforcement Cuckoo Filter (RCF), a variant of the cuckoo filter enhanced with a hotness-aware suffix cache. We note that a false positive item must have a matched fingerprint in the cuckoo filter, and propose to reduce false positives by memorizing them, but with their suffixes only. For each false positive item, we apply a linear-congruential-based hash function and then divide the hash value into three parts: the bucket index to be accessed in the cuckoo filter, the fingerprint to be stored in the cuckoo filter, and the suffix to be cached. Combining the three parts, a hot false positive item can be uniquely identified and avoided. Our evaluation results indicate that RCF significantly outperforms non-adaptive filters on skewed data traces. Given the same memory size, it achieves a much lower false positive ratio without sacrificing lookup throughput. Compared with adaptive filters, RCF provides a competitive false positive ratio while offering considerably higher (30-100×) lookup throughput.
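The three-way hash split and the suffix-cache lookup described above can be sketched as follows. This is a deliberately simplified single-bucket table (a real cuckoo filter hashes each item to two candidate buckets and relocates fingerprints on insertion), and the constants and bit widths are illustrative assumptions, not RCF's actual parameters:

```python
class SuffixCachedFilter:
    def __init__(self, n_buckets=1024):
        self.n = n_buckets
        self.buckets = [set() for _ in range(n_buckets)]  # fingerprint table
        self.fp_cache = set()  # (bucket, fingerprint, suffix) of hot false positives

    def _split(self, item):
        """One linear-congruential step, then carve the hash into the three
        parts: bucket index, fingerprint, and cached suffix."""
        h = (1103515245 * hash(item) + 12345) & 0xFFFFFFFF
        return h % self.n, (h >> 10) & 0xFF, (h >> 18) & 0x3F

    def insert(self, item):
        bucket, fp, _ = self._split(item)
        self.buckets[bucket].add(fp)

    def report_false_positive(self, item):
        """Memorize a hot false positive by its suffix only."""
        self.fp_cache.add(self._split(item))

    def lookup(self, item):
        bucket, fp, suffix = self._split(item)
        return fp in self.buckets[bucket] and (bucket, fp, suffix) not in self.fp_cache
```

Storing only the short suffix (rather than the whole item) is what keeps the cache small: within one bucket-and-fingerprint collision group, the suffix suffices to identify the hot false positive uniquely with high probability.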
Speaker Wenqi Luo (Nanjing University)



Multi-Order Clustering on Dynamic Networks: On Error Accumulation and Its Elimination

Yang Gao and Hongli Zhang (Harbin Institute of Technology, China)

Local clustering aims to find a high-quality cluster near a given vertex. Recently, higher-order units have been introduced into local clustering, and the underlying information has been verified to be essential. However, these techniques underestimate the original edges, leading to the degeneration of network information. Moreover, most higher-order models are designed for static networks, whereas real-world networks are generally large and evolve rapidly. Repeatedly running a static algorithm at each snapshot is usually computationally impractical, so recent approaches instead track a cluster by updating it sequentially. However, errors accumulate over lengthy evolutions, and the complete cluster must be recalculated periodically to maintain accuracy, which naturally hurts efficiency. To bridge these two gaps, we design a multi-order hypergraph and present a hybrid model for dynamic clustering. In particular, we propose an incremental method to track a personalized PageRank vector in the evolving hypergraph, which converges to the exact solution at each snapshot while significantly reducing complexity. We further develop a dynamic sweep to identify a cut in each vector, whereby a cluster can be incrementally updated with no accumulated errors. We provide a rigorous theoretical basis and conduct comprehensive experiments, which demonstrate the effectiveness of our approach.
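The incremental tracking builds on local personalized PageRank computation. A minimal sketch of the classic push method on a static undirected graph is below (the paper extends this idea to multi-order hypergraphs and to incremental updates; this baseline is illustrative only):

```python
def ppr_push(adj, seed, alpha=0.15, eps=1e-4):
    """Approximate personalized PageRank from `seed` via local pushes
    (Andersen-Chung-Lang): keep an estimate p and a residual r, and push
    mass out of any vertex whose residual exceeds eps times its degree."""
    p, r = {}, {seed: 1.0}
    queue = [seed]
    while queue:
        u = queue.pop()
        deg = len(adj[u])
        if r.get(u, 0.0) <= eps * deg:
            continue
        ru = r[u]
        p[u] = p.get(u, 0.0) + alpha * ru   # settle an alpha fraction at u
        r[u] = (1 - alpha) * ru / 2         # lazy walk: half the rest stays
        for v in adj[u]:
            r[v] = r.get(v, 0.0) + (1 - alpha) * ru / (2 * deg)
            if r[v] > eps * len(adj[v]):
                queue.append(v)
        if r[u] > eps * deg:
            queue.append(u)
    return p
```

A sweep cut over the sorted entries of p then yields the local cluster; the paper's contribution is keeping such a vector exact as the graph evolves, avoiding the error accumulation of purely sequential updates.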
Speaker Yang Gao (Harbin Institute of Technology)

Yang Gao received the B.S. degree in mathematics from Jilin University, Changchun, China, in 2009, and the Ph.D. degree in computer science from Harbin Institute of Technology, Harbin, China, in 2019. Currently, he is an assistant professor with the School of Cyberspace Science, Harbin Institute of Technology. His research interests include network and information security, and graph theory.


Session Chair

Mario Pickavet (Ghent University - imec, Belgium)

Session G-9

G-9: Modeling and Optimization

Conference
10:30 AM — 12:00 PM PDT
Local
May 23 Thu, 1:30 PM — 3:00 PM EDT
Location
Prince of Wales/Oxford

AnalyticalDF: Analytical Model for Blocking Probabilities Considering Spectrum Defragmentation in Spectrally-Spatially Elastic Optical Networks

Imran Ahmed and Roshan Kumar Rai (South Asian University, India); Eiji Oki (Kyoto University, Japan); Bijoy Chand Chatterjee (South Asian University, India)

Recently, multi-core and multi-mode fibres (MCMMFs) have been considered to overcome physical limitations and increase transport capacity. They are combined with elastic optical networks (EONs) to form spectrally-spatially elastic optical networks (SS-EONs), an emerging technology. Fragmentation and crosstalk (XT) are well-known drawbacks of SS-EONs that increase blocking probability; evaluating blocking probability analytically is difficult due to additional constraints. When calculating blocking probabilities in MCMMF-based SS-EONs, all current studies either employ simulation-based techniques or do not consider defragmentation in their analytical models. This paper proposes AnalyticalDF, an exact analytical continuous-time Markov chain model for blocking probabilities in SS-EONs, which considers defragmentation and the XT-avoided approach. AnalyticalDF generates all possible states and transitions while avoiding inter-core and inter-mode XT for single-class and multi-class requests. Single-class requests utilize the same number of slots, whereas multi-class requests adopt varying numbers of slots to accommodate client needs. We introduce an iterative approximation model for a single-hop link for cases where AnalyticalDF is not tractable due to scalability. We further extend the single-hop model to multi-hop networks. We evaluate AnalyticalDF and the iterative approximation model against simulation studies for a single-hop link. The numerical results indicate that AnalyticalDF outperforms a non-defragmentation-aware benchmark model.
Speaker
Speaker biography is not available.

Modeling Average False Positive Rates of Recycling Bloom Filters

Kahlil A Dozier, Loqman Salamatian and Dan Rubenstein (Columbia University, USA)

Bloom Filters are a space-efficient data structure for testing set membership that errs only in the false positive direction. However, the standard analysis of the false positive rate provides a worst-case bound that is overly conservative for the majority of network applications that utilize Bloom Filters, and loses accuracy by not taking into account the Bloom Filter state (the number of bits set) after each arrival. In this paper, we more accurately characterize the false positive dynamics of Bloom Filters as they are commonly used in networking applications. Network applications often utilize a Bloom Filter that "recycles": it repeatedly fills, empties, and fills again. Users of a recycling Bloom Filter are often best served by the average false positive rate rather than the worst case. We efficiently compute the average false positive rate of a recycling Bloom Filter via a Markov model and derive exact expressions for the long-term false positive rate. We apply our model to the standard Bloom Filter and a "two-phase" variant, verify model accuracy with simulations, and find that the previous worst-case formulation reduces the efficiency of Bloom Filters when applied in network applications.
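The abstract's observation that the false positive probability depends on the current filter state (the number of bits set) rather than only on a worst-case bound can be illustrated with two textbook formulas; this is standard Bloom filter arithmetic, not the paper's Markov model.

```python
def fp_rate_from_state(bits_set: int, m: int, k: int) -> float:
    """State-dependent false positive probability: with `bits_set` of the
    m bits equal to 1 and k hash functions, each probe hits a set bit
    with probability bits_set/m, and a false positive needs all k hits."""
    return (bits_set / m) ** k

def fp_rate_worst_case(n: int, m: int, k: int) -> float:
    """Standard (conservative) estimate after n insertions into an
    m-bit filter with k hash functions."""
    return (1.0 - (1.0 - 1.0 / m) ** (k * n)) ** k
```

For a recycling filter, the state-based rate rises from zero after each emptying, so averaging it over the fill-empty cycle gives a figure well below the bound evaluated at the fullest state.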
Speaker
Speaker biography is not available.

On Ultra-Sharp Queueing Bounds

Florin Ciucu and Sima Mehri (University of Warwick, United Kingdom (Great Britain)); Amr Rizk (University of Duisburg-Essen, Germany)

We present a robust method to analyze a broad range of classical queueing models, e.g., the GI/G/1 queue with renewal arrivals, an AR/G/1 queue with alternating renewals (AR) as a special class of Semi-Markovian processes, and Markovian fluid queues. At the core of the method lies a standard change-of-measure argument to reverse the sign of the negative drift in the underlying random walks. Combined with a suitable representation of the overshoot, we obtain exact results in terms of series. Closed-form and computationally fast bounds follow by taking the series' first terms, which are the dominant ones because of the positive drift under the new probability measure. The obtained bounds generalize the state-of-the-art class of martingale bounds and can be sharper by orders of magnitude.
Speaker
Speaker biography is not available.

Optimization of Offloading Policies for Accuracy-Delay Tradeoffs in Hierarchical Inference

Hasan Burhan Beytur, Ahmet Gunhan Aydin, Gustavo de Veciana and Haris Vikalo (The University of Texas at Austin, USA)

We consider a hierarchical inference system with multiple clients connected to a server via a shared communication resource. When necessary, clients with low-accuracy machine learning models can offload classification tasks to a server for processing on a high-accuracy model. We propose a distributed online offloading algorithm that maximizes accuracy subject to a shared resource utilization constraint, thus indirectly realizing the accuracy-delay tradeoffs possible under a given underlying network scheduler. The proposed algorithm, named Lyapunov-EXP4, introduces a loss structure based on Lyapunov-drift minimization techniques to the bandits-with-expert-advice framework. We prove that the algorithm converges to a near-optimal threshold policy on the confidence of the clients' local inference without prior knowledge of the system's statistics, and efficiently solves a constrained bandit problem with sublinear regret. We further consider settings where clients may employ multiple thresholds, allowing more aggressive optimization of overall accuracy at a possible loss in fairness. Extensive simulation results on real and synthetic data demonstrate the convergence of Lyapunov-EXP4 and show the accuracy-delay-fairness trade-offs achievable in such systems.
Speaker Hasan Burhan Beytur (The University of Texas at Austin)



Session Chair

Ningning Ding (Northwestern University, USA)

Session Lunch-3

Conference Lunch (for Registered Attendees)

Conference
12:00 PM — 1:30 PM PDT
Local
May 23 Thu, 3:00 PM — 4:30 PM EDT
Location
Georgia Ballroom and Plaza Ballroom (2nd Floor)

Session A-10

A-10: RF and Physical Layer

Conference
1:30 PM — 3:00 PM PDT
Local
May 23 Thu, 4:30 PM — 6:00 PM EDT
Location
Regency A

Cross-Shaped Separated Spatial-Temporal UNet Transformer for Accurate Channel Prediction

Hua Kang (Noah's Ark Lab, Huawei, Hong Kong); Qingyong Hu (Hong Kong University of Science and Technology, Hong Kong); Huangxun Chen (Hong Kong University of Science and Technology (Guangzhou), China); Qianyi Huang (Sun Yat-Sen University, China & Peng Cheng Laboratory, China); Qian Zhang (Hong Kong University of Science and Technology, Hong Kong); Min Cheng (Noah's Ark Lab, Huawei, Hong Kong)

Accurate channel estimation is crucial for the performance gains of massive multiple-input multiple-output (mMIMO) technologies. However, it is bandwidth-unfriendly to estimate the large channel matrix frequently to combat the time-varying wireless channel. Deep learning-based channel prediction has emerged to exploit the temporal relationships between historical and future channels and address the bandwidth-accuracy trade-off. Existing methods based on convolutional or recurrent neural networks suffer from intrinsic limitations, including restricted receptive fields and propagation errors. Therefore, we propose CS3T-UNet, a Transformer-based model tailored for mMIMO channel prediction. Specifically, we combine cross-shaped spatial attention with a group-wise temporal attention scheme to capture dependencies across the spatial and temporal domains, respectively, and introduce shortcut paths to aggregate multi-resolution representations effectively. Thus, CS3T-UNet can globally capture complex spatial-temporal relationships and predict multiple steps in parallel, meeting the requirement of channel coherence time. Extensive experiments demonstrate that the prediction performance of CS3T-UNet surpasses the best baseline by up to 6.86 dB with a smaller computation cost under two channel conditions.
Speaker Hua Kang (Noah's Ark Lab, Huawei)

I graduated from HKUST in August 2023 and am currently a researcher at Noah's Ark Lab, Huawei in Hong Kong.

I am actively working on topics at the intersection of IoT sensing, wireless communication, and deep learning, with a focus on building ubiquitous, privacy-friendly, and efficient machine learning systems for IoT applications.


Diff-ADF: Differential Adjacent-dual-frame Radio Frequency Fingerprinting for LoRa Devices

Wei He, Wenjia Wu, Xiaolin Gu and Zichao Chen (Southeast University, China)

Nowadays, LoRa radio frequency fingerprinting has gained widespread attention due to its lightweight nature and the difficulty of forging it. Existing fingerprint extraction methods fall into two main categories: deep learning-based methods and feature engineering-based methods. Deep learning-based methods have poor robustness and require significant resource costs for model training. Although feature engineering-based methods can overcome these drawbacks, the features they commonly use, such as carrier frequency offset (CFO) and phase noise, lack sufficient discriminative power. It is therefore very challenging to design a radio frequency fingerprinting solution with high accuracy and stable identification performance. Fortunately, we find that the differential phase noise between adjacent dual frames possesses excellent discriminative power and stability. We accordingly design a radio frequency fingerprinting solution called Diff-ADF, which employs a classifier with differential phase noise as the primary feature, complemented by CFO as an auxiliary feature. Finally, we implement Diff-ADF and conduct experiments in real environments. Experimental results demonstrate that our proposed solution achieves an accuracy of over 90% on training and test data collected on different days, significantly outperforming deep learning-based methods. Even in non-line-of-sight environments, our identification accuracy still reaches close to 85%.
Speaker Wei He (Southeast University)

Graduate student, School Of Cyber Science and Engineering, Southeast University


Cross-domain, Scalable, and Interpretable RF Device Fingerprinting

Tianya Zhao and Xuyu Wang (Florida International University, USA); Shiwen Mao (Auburn University, USA)

In this paper, we propose a cross-domain, scalable, and interpretable radio frequency (RF) fingerprinting system using a modified prototypical network (PTN) and an explanation-guided data augmentation across various domains and datasets with only a few samples. Specifically, a convolutional neural network is employed as the feature extractor of the PTN to extract RF fingerprint features. The predictions are made by comparing the similarity between prototypes and feature embedding vectors. To further improve the system performance, we design a customized loss function and deploy an eXplainable Artificial Intelligence (XAI) method to guide data augmentation during fine-tuning. To evaluate the effectiveness of our system in addressing domain shift and scalability problems, we conducted extensive experiments in both cross-domain and novel-device scenarios. Our study shows that our approach achieves exceptional performance in the cross-domain case, exhibiting an accuracy improvement of approximately 80\% compared to convolutional neural networks in the best case. Furthermore, our approach demonstrates promising results in the novel-device case across different datasets. Our customized loss function and XAI-guided data augmentation can further improve authentication accuracy to a certain degree.
Speaker Tianya Zhao (Florida International University)

Tianya Zhao is a second-year Ph.D. student studying computer science at FIU, supervised by Dr. Xuyu Wang. Prior to this, he received his Master's degree from Carnegie Mellon University and Bachelor's degree from Hunan University. In his current Ph.D. program, he is focusing on AIoT, AI Security, Wireless Sensing, and Smart Health.


PRISM: Pre-training RF Signals in Sparsity-aware Masked Autoencoders

Liang Fang, Ruiyuan Song, Zhi Lu, Dongheng Zhang, Yang Hu, Qibin Sun and Yan Chen (University of Science and Technology of China, China)

This paper introduces a novel paradigm for learning-based RF sensing, termed Pre-training RF signals In Sparsity-aware Masked autoencoders (PRISM), which shifts the RF sensing paradigm from supervised training on limited annotated datasets to unsupervised pre-training on large-scale unannotated datasets, followed by fine-tuning with a small annotated dataset. PRISM leverages a carefully designed sparsity-aware masking strategy to predict missing contents by masking a portion of RF signals, resulting in an efficient pre-training framework that significantly reduces computation and memory resources. This addresses the major challenges posed by large-scale and high-dimensional RF datasets, where memory consumption and computation speed are critical factors. We demonstrate PRISM's excellent generalization performance across diverse RF sensing tasks by evaluating it on three typical scenarios: human silhouette segmentation, 3D pose estimation, and gesture recognition, involving two general RF devices, radar and WiFi. The experimental results provide strong evidence for the effectiveness of PRISM as a robust learning-based solution for large-scale RF sensing applications.
Speaker
Speaker biography is not available.

Session Chair

Shiwen Mao (Auburn University, USA)

Session B-10

B-10: Network Verification and Tomography

Conference
1:30 PM — 3:00 PM PDT
Local
May 23 Thu, 4:30 PM — 6:00 PM EDT
Location
Regency B

Network Can Help Check Itself: Accelerating SMT-based Network Configuration Verification Using Network Domain Knowledge

Xing Fang (Xiamen University, China); Feiyan Ding (Xiamen, China); Bang Huang, Ziyi Wang, Gao Han, Rulan Yang, Lizhao You and Qiao Xiang (Xiamen University, China); Linghe Kong and Yutong Liu (Shanghai Jiao Tong University, China); Jiwu Shu (Xiamen University, China)

Satisfiability Modulo Theories (SMT) based network configuration verification tools are powerful in preventing network configuration errors. However, their fundamental limitation is efficiency, because they rely on generic SMT solvers to solve SMT problems, which are in general NP-complete. In this paper, we show that by leveraging network domain knowledge, we can substantially accelerate SMT-based network configuration verification. Our key insights are: given a network configuration verification formula, network domain knowledge can (1) guide the search for solutions to the formula by avoiding unnecessary search spaces; and (2) help simplify the formula, reducing the problem scale. We leverage these insights to design a new SMT-based network configuration verification tool called NetSMT. Extensive evaluation using real-world topologies and synthetic network configurations shows that NetSMT achieves orders-of-magnitude improvements compared to state-of-the-art methods.
Speaker
Speaker biography is not available.

P4Inv: Inferring Packet Invariants for Verification of Stateful P4 Programs

Delong Zhang, Chong Ye and Fei He (Tsinghua University, China)

P4 is widely adopted for programming data planes in software-defined networking. Formal verification of P4 programs is essential to ensure network reliability and security. However, existing P4 verifiers overlook the stateful nature of packet processing, rendering them inadequate for verifying complex stateful P4 programs.

In this paper, we introduce a novel concept called packet invariants to address the stateful aspects of P4 programs. We present an automated verification algorithm specifically designed for stateful P4 programs. It efficiently discovers and validates packet invariants in a data-driven manner, offering a novel and effective verification approach for stateful P4 programs. To the best of our knowledge, this approach represents the first attempt to generate and leverage domain-specific invariants for P4 program verification. We implement our approach in a prototype tool called P4Inv. Experimental results demonstrate its effectiveness in verifying stateful P4 programs.
Speaker Delong Zhang (Tsinghua University)

Graduate student of School of Software, Tsinghua University, engaged in the field of formal verification.


Routing-Oblivious Network Tomography with Flow-based Generative Model

Yan Qiao and Xinyu Yuan (Hefei University of Technology, China); Kui Wu (University of Victoria, Canada)

Given the high cost of directly measuring the traffic matrix (TM), researchers have dedicated decades to devising methods for estimating the complete TM from low-cost link loads by solving a set of heavily ill-posed linear equations. Today's increasingly intricate networks present an even greater challenge: the routing matrix within these equations can no longer be deemed reliable. To address this challenge, we, for the first time, employ a flow-based generative model for the TM estimation problem by establishing an invertible correlation between the TM and link loads, oblivious to the routing matrix. We demonstrate that the information lost in the ill-posed equations can be segregated independently from the TM. Our model jointly learns the invertible correlations between the TM and link loads as well as the distribution of the lost information. As a result, our model can unbiasedly reverse-transform link loads into the true TM. Our model has undergone extensive experiments on two real-world datasets. Surprisingly, even without knowledge of the routing matrix, it significantly outperforms six representative baselines in both deterministic and noisy routing scenarios in terms of estimation accuracy and distribution similarity. In particular, when the actual routing matrix is unavailable, our model improves the performance of the best baseline by 41%~58%.
Speaker
Speaker biography is not available.

VeriEdge: Verifying and Enforcing Service Level Agreements for Pervasive Edge Computing

Xiaojian Wang and Ruozhou Yu (North Carolina State University, USA); Dejun Yang (Colorado School of Mines, USA); Huayue Gu and Zhouyu Li (North Carolina State University, USA)

Edge computing has gained popularity for its promise of low latency and high-quality computing services to users. However, it has also introduced the challenge of mutual untrust between users and edge devices regarding service level agreement (SLA) compliance. This obstacle hampers the wide adoption of edge computing, especially in pervasive edge computing (PEC), where edge devices can freely enter or exit the market, making SLA verification and enforcement significantly more challenging. In this paper, we propose a framework for verifying and enforcing SLAs in PEC, allowing a user to assess the SLA compliance of an edge service and ensure the correctness of service results. Our solution, called VeriEdge, employs a verifiable delayed sampling approach to sample a small number of computation steps, and relies on randomly selected verifiers to verify the correctness of the computation results. To ensure the verification process is non-manipulable, we employ verifiable random functions to post-select the verifier(s). A dispute protocol is designed to resolve disputes over potential misbehavior. Rigorous security analysis demonstrates that VeriEdge achieves a high probability of detecting SLA violations with minimal overhead. Experimental results indicate that VeriEdge is lightweight, practical, and efficient.
Speaker Ruozhou Yu, NC State University, USA

Ruozhou Yu is an Assistant Professor in Computer Science from the NC State University, USA. His research interests include edge computing, network security, blockchain, and quantum networks. He is a TPC member and an organizing committee member of INFOCOM 2024. He received the US NSF CAREER Award in 2021.


Session Chair

Kui Wu (University of Victoria, Canada)

Session C-10

C-10: Network Security and Privacy

Conference
1:30 PM — 3:00 PM PDT
Local
May 23 Thu, 4:30 PM — 6:00 PM EDT
Location
Regency C

Utility-Preserving Face Anonymization via Differentially Private Feature Operations

Chengqi Li, Sarah Simionescu, Wenbo He and Sanzheng Qiao (McMaster University, Canada); Nadjia Kara (École de Technologie Supérieure, Canada); Chamseddine Talhi (Ecole de Technologie Superieure, Canada)

Facial images play a crucial role in many web and security applications, but their use comes with notable privacy risks. Despite the availability of various face anonymization algorithms, they often fail to withstand advanced attacks while struggling to maintain utility for subsequent applications. We present two novel face anonymization algorithms that utilize feature operations to overcome these limitations. The first algorithm employs high-level feature matching, while the second incorporates additional low-level feature perturbation and regularization. These algorithms significantly enhance the utility of anonymized images while ensuring differential privacy. Additionally, we introduce a task-based benchmark to enable fair and comprehensive evaluations of privacy and utility across different algorithms. Through experiments, we demonstrate that our algorithms outperform others in preserving the utility of anonymized facial images in classification tasks while effectively protecting against a wide range of attacks.
Speaker
Speaker biography is not available.

Toward Accurate Butterfly Counting with Edge Privacy Preserving in Bipartite Networks

Mengyuan Wang, Hongbo Jiang, Peng Peng, Youhuan Li and Wenbin Huang (Hunan University, China)

Butterfly counting is widely used to analyze bipartite networks, but counting butterflies in original bipartite networks can reveal sensitive data and pose a risk to individual privacy, specifically edge privacy. Current privacy notions do not fully address the needs of both user-user and user-item bipartite networks. In this paper, we propose a novel privacy notion, edge decentralized differential privacy (edge DDP), which preserves edge privacy in any bipartite network. We also design the randomized edge protocol (REP) to perturb real edges in bipartite networks. However, the significant amount of noise in perturbed bipartite networks often leads to an overcount of butterflies. To achieve accurate butterfly counting, we design the randomized group protocol (RGP) to reduce noise. Combining REP and RGP, we propose a two-phase framework called butterfly counting in limitedly synthesized bipartite networks (BC-LimBN) to synthesize networks for accurate butterfly counting. BC-LimBN has been rigorously proven to satisfy edge DDP. Our experiments on various datasets confirm the high accuracy of BC-LimBN in butterfly counting and its superiority over competitors, with a mean relative error below 10%. Furthermore, our experiments show that BC-LimBN has a low time cost, requiring only a few seconds on our datasets.
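For context, a butterfly is a (2,2)-biclique, and its exact count on a plaintext (non-private) bipartite graph can be obtained by summing, over pairs of same-side vertices, the number of ways to pick two common neighbors. This baseline sketch does not implement the paper's REP/RGP protocols.

```python
from itertools import combinations

def count_butterflies(adj):
    """Exact butterfly ((2,2)-biclique) count in a bipartite graph.
    `adj` maps each left-side vertex to its set of right-side neighbors.
    Each pair (u, v) of left vertices with c common neighbors contributes
    C(c, 2) butterflies."""
    total = 0
    for u, v in combinations(adj, 2):
        c = len(adj[u] & adj[v])
        total += c * (c - 1) // 2
    return total
```

The privacy problem the paper tackles is that each term of this sum is computed from real edges, which is exactly what edge DDP forbids revealing directly.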
Speaker
Speaker biography is not available.

Efficient and Effective In-Vehicle Intrusion Detection System using Binarized Convolutional Neural Network

Linxi Zhang (Central Michigan University, USA); Xuke Yan (Oakland University, USA); Di Ma (University of Michigan-Dearborn, USA)

Modern vehicles are equipped with multiple Electronic Control Units (ECUs) communicating over in-vehicle networks such as Controller Area Network (CAN). Inherent security limitations in CAN necessitate the use of Intrusion Detection Systems (IDSs) for protection against potential threats. While some IDSs leverage advanced deep learning to improve accuracy, issues such as long processing time and large memory size remain. Existing Binarized Neural Network (BNN)-based IDSs, proposed as a solution for efficiency, often compromise on accuracy. To this end, we introduce a novel Binarized Convolutional Neural Network (BCNN)-based IDS, designed to exploit the temporal and spatial characteristics of CAN messages to achieve both efficiency and detection accuracy. In particular, our approach includes a novel input generator capturing temporal and spatial correlations of messages, aiding model learning and ensuring high-accuracy performance. Experimental results suggest our IDS effectively reduces memory utilization and detection latency while maintaining high detection rates. Our IDS runs 4 times faster and utilizes only 3.3% of the memory space required by a full-precision CNN-based IDS. Meanwhile, our proposed system demonstrates a detection accuracy between 94.19% and 96.82% relative to the CNN-based IDS across different attack scenarios. This performance marks a noteworthy improvement over existing state-of-the-art BNN-based IDS designs.
Speaker
Speaker biography is not available.

5G-WAVE: A Core Network Framework with Decentralized Authorization for Network Slices

Pragya Sharma and Tolga O Atalay (Virginia Tech, USA); Hans-Andrew Gibbs and Dragoslav Stojadinovic (Kryptowire LLC, USA); Angelos Stavrou (Virginia Tech & Kryptowire, USA); Haining Wang (Virginia Tech, USA)

5G mobile networks leverage Network Function Virtualization (NFV) to offer services in the form of network slices. Each network slice is a logically isolated fragment constructed by service chaining a set of Virtual Network Functions (VNFs). The Network Repository Function (NRF) acts as a central Open Authorization (OAuth) 2.0 server to secure inter-VNF communications, resulting in a single point of failure. Thus, we propose 5G-WAVE, a decentralized authorization framework for the 5G core, by leveraging the WAVE framework and integrating it into the OpenAirInterface (OAI) 5G core. Our design relies on Side-Car Proxies (SCPs) deployed alongside individual VNFs, allowing point-to-point authorization. Each SCP acts as a WAVE engine to create entities and attestations and verify incoming service requests. We measure the authorization latency overhead for VNF registration, 5G Authentication and Key Agreement (AKA), and data session setup, and observe that WAVE verification introduces a 155 ms overhead to HTTP transactions for decentralizing authorization. Additionally, we evaluate the scalability of 5G-WAVE by instantiating more network slices, observing a 1.4x increase in latency with a 10x growth in network size. We also discuss how 5G-WAVE can significantly reduce the 5G attack surface without using OAuth 2.0 while addressing several key issues of 5G standardization.
Speaker
Speaker biography is not available.

Session Chair

Rui Zhang (University of Delaware, USA)

Session D-10

D-10: High Speed Networking

Conference
1:30 PM — 3:00 PM PDT
Local
May 23 Thu, 4:30 PM — 6:00 PM EDT
Location
Regency D

Transparent Broadband VPN Gateway: Achieving 0.39 Tbps per Tunnel with Bump-in-the-Wire

Kenji Tanaka (NTT, Japan); Takashi Uchida and Yuki Matsuda (Fixstars, Japan); Yuki Arikawa (NTT, Japan); Shinya Kaji (Fixstars, Japan); Takeshi Sakamoto (NTT, Japan)

The demand for virtual private networks (VPNs) that provide confidentiality, integrity, and authenticity of communications is growing every year. IPsec is one of the oldest and most widely used VPN protocols, implemented between the internet protocol (IP) layer and the data link layer of the Linux kernel. This implementation method, known as bump-in-the-stack, has the advantage of transparently applying IPsec to traffic without changing the application. However, its throughput efficiency (Gbps/core) is worse than that of regular Linux communication. Therefore, we chose the bump-in-the-wire (BITW) architecture, which handles IPsec in hardware separate from the host. Our proposed BITW architecture consists of inline cryptographic accelerators implemented in field-programmable gate arrays and a programmable switch that connects multiple such accelerators. The VPN gateway implemented with our architecture is transparent and improves throughput efficiency by 3.51 times and power efficiency by 3.40 times over a VPN gateway implemented in the Linux kernel. It also demonstrates excellent scalability, scaling to a maximum of 386.24 Gbps per tunnel and exceeding state-of-the-art technology in maximum throughput and efficiency per tunnel. In multi-tunnel use cases, the proposed architecture improves energy efficiency by 2.49 times.
Speaker
Speaker biography is not available.

Non-invasive performance prediction of high-speed softwarized network services with limited knowledge

Qiong Liu (Telecom Paris, Institute Polytechnique de Paris, France); Tianzhu Zhang (Nokia Bell Labs, France); Leonardo Linguaglossa (Telecom Paris, France)

Modern telco networks have experienced a significant paradigm shift in the past decade, thanks to the proliferation of network softwarization. Despite the benefits of softwarized networks, the constituent software data planes cannot always guarantee predictable performance due to resource contention in the underlying shared infrastructure. Performance prediction is thus paramount for network operators to fulfill Service-Level Agreements (SLAs), especially in high-speed regimes (e.g., Gigabit or Terabit Ethernet). Existing solutions heavily rely on in-band feature collection, which imposes non-trivial engineering and data-path overhead.

This paper proposes a non-invasive approach to data-plane performance prediction: our framework complements state-of-the-art solutions by measuring and analyzing low-level features ubiquitously available in the network infrastructure. Accessing these features does not hamper the packet data path. Our approach does not rely on prior knowledge of the input traffic, VNFs' internals, and system details.
We show that (i) low-level hardware features exposed by the NFV infrastructure can be collected and interpreted for performance issues, (ii) predictive models can be derived with classical ML algorithms, and (iii) these models can accurately predict performance impairments in real NFV systems. Our code and datasets are publicly available.
Speaker
Speaker biography is not available.

BurstDetector: Real-Time and Accurate Across-Period Burst Detection in High-Speed Networks

Zhongyi Cheng, Guoju Gao, He Huang, Yu-e Sun and Yang Du (Soochow University, China); Haibo Wang (University of Kentucky, USA)

Traffic measurement provides essential information for various network services. Bursts are a common phenomenon in high-speed network streams, manifesting as a surge in the number of a flow's incoming packets. We propose a new definition named across-period burst, which considers the change not between two adjacent time windows but between two groups of temporally contiguous windows. The across-period burst definition better captures the continuous changes of flows in high-speed networks. To achieve real-time burst detection with high accuracy and low memory consumption, we propose a novel sketch named BurstDetector, which consists of two stages. Stage 1 excludes flows that will not become burst flows, while Stage 2 accurately records the information of potential burst flows and carries out across-period burst detection at the end of every time window. We further propose an optimization called Hierarchical Cell, which improves the memory utilization of BurstDetector. In addition, we analyze the estimation accuracy and time complexity of BurstDetector. Extensive experiments based on real-world datasets show that BurstDetector achieves at least 2.8 times the detection accuracy and processing throughput of some existing algorithms.
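A minimal sketch of the across-period idea, under the simplifying assumption that a burst is declared when the average packet count over the most recent window group exceeds that of the preceding group by a factor `threshold` (the paper's exact definition and its two-stage sketch data structure are not reproduced here):

```python
def is_across_period_burst(counts, group_len, threshold):
    """Illustrative across-period burst check for one flow.
    `counts` is the flow's per-window packet counts, oldest first.
    Compares the average over the last `group_len` windows against the
    average over the `group_len` windows before them."""
    if len(counts) < 2 * group_len:
        return False  # not enough history for two full window groups
    prev = sum(counts[-2 * group_len:-group_len]) / group_len
    curr = sum(counts[-group_len:]) / group_len
    return curr >= threshold * max(prev, 1e-9)
```

Comparing window groups rather than single adjacent windows is what smooths out one-window spikes and captures sustained, continuous change.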
Speaker
Speaker biography is not available.

NetFEC: In-network FEC Encoding Acceleration for Latency-sensitive Multimedia Applications

Yi Qiao, Han Zhang and Jilong Wang (Tsinghua University, China)

In the face of packet loss, latency-sensitive multimedia applications cannot afford retransmission, because loss detection and retransmission lead to extra latency or otherwise compromised media quality. Alternatively, forward error correction (FEC) ensures reliability by adding redundancy, achieving lower latency at the cost of bandwidth and computational overhead. We propose to relocate FEC encoding to hardware that suits its computational pattern better than CPUs do. In this paper, we present NetFEC, an in-network acceleration system that offloads the entire FEC encoding process onto emerging programmable switching ASICs, eliminating all CPU involvement. We design the ghost packet mechanism so that NetFEC remains compatible with important media transport functionalities, including congestion control, pacing, and statistics. We integrate NetFEC with WebRTC and conduct extensive experiments on real hardware. Our evaluations demonstrate that NetFEC relieves server CPU burden while adding negligible overhead.
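For readers unfamiliar with the FEC trade-off, here is a minimal single-parity sketch in Python (not NetFEC's encoder, which targets switch ASICs and more general codes): one XOR parity packet per group lets the receiver repair any single loss without waiting for a retransmission.

```python
def xor_encode(packets):
    """Compute one XOR parity packet over a group of equal-length packets."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

def xor_recover(received, parity):
    """Recover the single missing packet (the None entry) from the parity."""
    missing = bytearray(parity)
    for pkt in received:
        if pkt is not None:
            for i, b in enumerate(pkt):
                missing[i] ^= b
    return bytes(missing)
```

The redundancy cost is one extra packet per group; the latency benefit is that no retransmission round-trip is needed for a single loss.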
Speaker Yi Qiao

Yi Qiao received his B.S. degree of Computer Science and Technology from Tsinghua University. He is now a Ph.D. candidate at Institute of Network Science and Cyberspace, Tsinghua University. His research focuses on software-defined networking, network function virtualization and cyber security.


Session Chair

Baochun Li (University of Toronto, Canada)

Session E-10

E-10: Machine Learning 4

Conference
1:30 PM — 3:00 PM PDT
Local
May 23 Thu, 4:30 PM — 6:00 PM EDT
Location
Regency E

Augment Online Linear Optimization with Arbitrarily Bad Machine-Learned Predictions

Dacheng Wen (The University of Hong Kong, Hong Kong); Yupeng Li (Hong Kong Baptist University, Hong Kong); Francis C.M. Lau (The University of Hong Kong, Hong Kong)

The online linear optimization paradigm is important to many real-world network applications as well as theoretical algorithmic studies. Recent studies have made attempts to augment online linear optimization with machine-learned predictions of the cost function that are meant to improve the performance of the learner. However, they fail to address the possible realistic case where the predictions can be arbitrarily bad. In this work, we take the first step to study the problem of online linear optimization with a dynamic number of arbitrarily bad machine-learned predictions per round and propose an algorithm termed OLOAP. Our theoretical analysis shows that, when the qualities of the predictions are satisfactory, OLOAP achieves a regret bound of O(log T), which circumvents the tight lower bound of Ω(√T) for the vanilla problem of online linear optimization (i.e., the one without any predictions). Meanwhile, the regret of our algorithm is never worse than O(√T) irrespective of the qualities of predictions. In addition, we further derive a lower bound for the regret of the studied problem, which demonstrates that OLOAP is near-optimal. We consider two important network applications and conduct extensive evaluations. Our results validate the superiority of our algorithm over state-of-the-art approaches.
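The prediction-augmented idea can be pictured with optimistic online gradient descent over the probability simplex (a generic textbook scheme, not OLOAP itself): the learner first steps toward the predicted cost and plays that point, then corrects its internal iterate with the realized cost, so good predictions help while bad ones cannot derail the standard update. The step size and setup below are illustrative.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def optimistic_ogd(costs, predictions, eta=0.1):
    """Each round: step toward the predicted cost and play that point, then
    correct the internal iterate with the realized cost. Returns total cost."""
    d = len(costs[0])
    x = np.full(d, 1.0 / d)
    total = 0.0
    for c_t, c_hat in zip(costs, predictions):
        play = project_simplex(x - eta * np.asarray(c_hat))  # optimistic step
        total += float(np.dot(c_t, play))
        x = project_simplex(x - eta * np.asarray(c_t))       # true-cost update
    return total
```

Because the internal iterate is updated only with realized costs, arbitrarily bad predictions degrade the played point for one round but never corrupt the learner's state.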
Speaker
Speaker biography is not available.

Dancing with Shackles, Meet the Challenge of Industrial Adaptive Streaming via Offline Reinforcement Learning

Lianchen Jia (Tsinghua University, China); Chao Zhou (Beijing Kuaishou Technology Co., Ltd, China); Tianchi Huang, Chaoyang Li and Lifeng Sun (Tsinghua University, China)

Adaptive video streaming has been studied for over 10 years and has demonstrated remarkable performance. However, adaptive video streaming is not an independent algorithm; it relies on other components of the video system. Consequently, as those components undergo optimization, the gap between the traditional simulator and the real-world system continues to grow, which forces the adaptive video streaming algorithm to adapt to these variations. To address the challenges facing industrial adaptive video streaming, we introduce a novel offline reinforcement learning framework called Backwave. This framework leverages history logs to reduce the sim-real gap. We propose new metrics based on counterfactual reasoning to evaluate its performance, and we integrate expert knowledge to generate valuable data that mitigates the issue of data override. Furthermore, we employ curriculum learning to minimize additional errors. We deployed Backwave on a mainstream commercial short video platform, Kuaishou. In a series of A/B tests conducted over nearly one month with over 400M daily watch times, Backwave consistently outperforms prior algorithms. Specifically, Backwave reduces stall time by 0.45% to 8.52% while maintaining comparable video quality, and it improves average play duration by 0.12% to 0.16% and overall play duration by 0.12% to 0.26%.
Speaker
Speaker biography is not available.

GraphProxy: Communication-Efficient Federated Graph Learning with Adaptive Proxy

Junyang Wang, Lan Zhang, Junhao Wang, Mu Yuan and Yihang Cheng (University of Science and Technology of China, China); Qian Xu (BestPay Co.,Ltd,China Telecom, China); Bo Yu (Bestpay Co., Ltd, China Telecom, China)

Federated graph learning (FGL) enables multiple participants with distributed but connected graph data to collaboratively train a model in a privacy-preserving way. However, the high communication cost hinders the adoption of FGL in many resource-limited or delay-sensitive applications. In this work, we focus on reducing the communication cost incurred by the transmission of neighborhood information in FGL. We propose to search for local proxies that can substitute for external neighbors, and develop a novel federated graph learning framework named GraphProxy. GraphProxy utilizes representation similarity and class correlation to select local proxies for external neighbors, and dynamically adjusts the proxy strategy according to the changing representations of nodes during the iterative training process. We also perform a theoretical analysis and show that using a proxy node has a similar influence on training when it is sufficiently similar to the external one. Extensive evaluations show the effectiveness of our design; e.g., GraphProxy achieves an 8-fold improvement in communication efficiency with only 0.14% performance degradation.
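A toy version of proxy selection by representation similarity and class agreement might look as follows. This is our illustrative reading of the idea; the cosine criterion and the `min_sim` threshold are assumptions, not GraphProxy's actual rule.

```python
import numpy as np

def select_proxy(external_emb, local_embs, local_labels, external_label,
                 min_sim=0.8):
    """Return the index of the most similar local node whose label matches the
    external neighbor's label, or None if no candidate is similar enough."""
    ext = external_emb / np.linalg.norm(external_emb)
    best, best_sim = None, min_sim
    for i, (emb, lab) in enumerate(zip(local_embs, local_labels)):
        if lab != external_label:      # class correlation: require same class
            continue
        sim = float(np.dot(ext, emb / np.linalg.norm(emb)))  # cosine similarity
        if sim > best_sim:
            best, best_sim = i, sim
    return best
```

When a sufficiently similar local proxy exists, the external neighbor's representation never needs to cross the network, which is where the communication saving comes from.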
Speaker
Speaker biography is not available.

Learning Context-Aware Probabilistic Maximum Coverage Bandits: A Variance-Adaptive Approach

Xutong Liu (The Chinese University of Hong Kong, Hong Kong); Jinhang Zuo (University of Massachusetts Amherst & California Institute of Technology, USA); Junkai Wang (Fudan University, China); Zhiyong Wang (The Chinese University of Hong Kong, Hong Kong); Yuedong Xu (Fudan University, China); John Chi Shing Lui (Chinese University of Hong Kong, Hong Kong)

Probabilistic maximum coverage (PMC) is an important framework that can model many network applications, including mobile crowdsensing, content delivery, and task replication. In PMC, an operator chooses nodes in a graph that can probabilistically cover other nodes, aiming to maximize the total rewards from the covered nodes. To tackle the challenge of unknown parameters in network environments, PMC has been studied in the online learning setting, i.e., as the PMC bandit. However, existing PMC bandits lack context-awareness and fail to exploit valuable contextual information, limiting their efficiency and adaptability in dynamic environments. To address this limitation, we propose a novel context-aware PMC bandit model (C-PMC). C-PMC employs a linear structure to model the mean outcome of each arm, efficiently incorporating contextual features and enhancing its applicability to large-scale network systems. We then design a variance-adaptive contextual combinatorial upper confidence bound algorithm (VAC2UCB), which utilizes second-order statistics, specifically variance, to re-weight feedback data and estimate unknown parameters. Our theoretical analysis shows that C-PMC achieves a regret of \(\tilde{O}(d\sqrt{VT})\), independent of the number of edges E and the action size K. Finally, we conduct experiments on synthetic and real-world datasets, showing the superior performance of VAC2UCB in context-aware mobile crowdsensing and user-targeted content delivery applications.
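To make "variance-adaptive" concrete, here is a generic empirical-Bernstein (UCB-V style) index for picking the top-k arms of a combinatorial action. The constants and index form are textbook choices for illustration, not the paper's VAC2UCB, which additionally handles contexts through a linear model.

```python
import math

def ucb_v_topk(pulls, rewards, sq_rewards, t, k):
    """Rank arms by an empirical-Bernstein (variance-adaptive) index and
    return the k highest-scoring arm indices; untried arms score infinity.

    pulls[i]      -- number of times arm i was pulled
    rewards[i]    -- sum of observed rewards of arm i
    sq_rewards[i] -- sum of squared observed rewards of arm i
    """
    scores = []
    for n, s, sq in zip(pulls, rewards, sq_rewards):
        if n == 0:
            scores.append(float('inf'))
            continue
        mean = s / n
        var = max(sq / n - mean * mean, 0.0)  # empirical variance
        bonus = math.sqrt(2 * var * math.log(t) / n) + 3 * math.log(t) / n
        scores.append(mean + bonus)
    return sorted(range(len(pulls)), key=lambda i: -scores[i])[:k]
```

Arms with low empirical variance get a smaller exploration bonus, so well-understood arms are exploited sooner, which is the intuition behind re-weighting feedback by second-order statistics.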
Speaker
Speaker biography is not available.

Session Chair

Walter Willinger (NIKSUN, USA)

Session F-10

F-10: Spectrum Access and Sensing

Conference
1:30 PM — 3:00 PM PDT
Local
May 23 Thu, 4:30 PM — 6:00 PM EDT
Location
Regency F

Effi-Ace: Efficient and Accurate Prediction for High-Resolution Spectrum Tenancy

Rui Zou (North Carolina State University, USA); Wenye Wang (NC State University, USA)

Spectrum prediction is a key enabler for the forthcoming coexistence paradigm in which various Radio Access Technologies share overlapping radio spectrum to substantially improve spectrum efficiency in 5G and beyond systems. Though this fundamental issue has received tremendous research attention, existing algorithms are designed for and validated against spectrum usage data at low time-frequency granularities, which causes inevitable errors when they are applied to spectrum prediction at realistic resolutions. Therefore, in this paper, we improve three key components along the pipeline of spectrum prediction. First, we obtain raw spectrum data at the same resolution as scheduling, which reflects the actual dynamics of the quantity to be predicted. Second, we improve the Deep Q-Network (DQN) prediction algorithm with enhanced experience replay to reduce the sample complexity, so that the improved DQN is more efficient in terms of sample quantities. Third, new prediction features are extracted from high-resolution measurement data to improve prediction accuracy. According to our thorough experiments, the proposed prediction algorithm substantially reduces the sample complexity by 88.9%, and the prediction accuracy improvements are up to 14% when compared with various state-of-the-art counterparts.
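Experience replay, which the improved DQN builds on, can be sketched in a few lines. This is a plain uniform buffer; the paper's enhanced replay presumably selects samples differently, which we do not attempt to reproduce.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size FIFO store of (state, action, reward, next_state) tuples;
    old transitions are evicted automatically once capacity is reached."""

    def __init__(self, capacity):
        self.buf = deque(maxlen=capacity)

    def push(self, transition):
        self.buf.append(transition)

    def sample(self, batch_size):
        # Uniform sampling without replacement breaks temporal correlation
        # between consecutive transitions used for a gradient step.
        return random.sample(self.buf, min(batch_size, len(self.buf)))

    def __len__(self):
        return len(self.buf)
```

A DQN training loop would call `push` after every environment step and `sample` once the buffer holds enough transitions for a minibatch.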
Speaker
Speaker biography is not available.

Scalable Network Tomography for Dynamic Spectrum Access

Aadesh Madnaik and Neil C Matson (Georgia Institute of Technology, USA); Karthikeyan Sundaresan (Georgia Tech, USA)

Mobile networks have increased spectral efficiency through advanced multiplexing strategies that are coordinated by base stations (BS) in licensed spectrum. However, external interference on clients leads to significant performance degradation during dynamic (unlicensed) spectrum access (DSA). We introduce the notion of network tomography for DSA, whereby clients are transformed into spectrum sensors whose joint access statistics are measured and used to account for interfering sources. Albeit promising, performing such tomography naively incurs an impractical overhead that scales exponentially with the multiplexing order of the strategies deployed -- which will only continue to grow with 5G/6G technologies.

To this end, we propose a novel, scalable network tomography framework called NeTo-X that estimates joint client access statistics with just linear overhead and forms a blueprint of the interference, thus enabling efficient DSA for future networks. NeTo-X's design incorporates intelligent algorithms that leverage multi-channel diversity and the spatial locality of interference impact on clients to accurately estimate the desired interference statistics from just pair-wise measurements of its clients. The merits of the framework are showcased in the context of resource management and jammer localization applications, where NeTo-X significantly outperforms baseline approaches and closely approximates optimal performance at scalable overhead.
Speaker
Speaker biography is not available.

Stitching the Spectrum: Semantic Spectrum Segmentation with Wideband Signal Stitching

Daniel Uvaydov, Milin Zhang, Clifton P Robinson, Salvatore D'Oro, Tommaso Melodia and Francesco Restuccia (Northeastern University, USA)

Spectrum sensing is fundamental to enabling the coexistence of different wireless technologies in shared spectrum bands. We propose a completely novel approach based on semantic spectrum segmentation, where multiple signals are simultaneously classified and localized in both time and frequency directly from unprocessed I/Q samples. In contrast to state-of-the-art computer vision algorithms, we add non-local blocks to combine the spatial features of signals and thus achieve better performance. In addition, we propose a novel data generation approach where a limited set of easy-to-collect real-world wireless signals are ``stitched together'' to generate large-scale, wideband, and diverse datasets. Experimental results obtained on multiple testbeds over the course of 3 days show that our approach classifies and localizes signals with a mean intersection over union (IOU) of 96.70% across 5 wireless protocols while performing in real-time with a latency of 2.6 ms. Moreover, we demonstrate that our approach based on non-local blocks achieves 7% higher accuracy than the state-of-the-art U-Net algorithm when segmenting the most challenging signals.
Speaker
Speaker biography is not available.

VIA: Establishing the link between spectrum sensor capabilities and data analytics performance

Karyn Doke and Blessing Andrew Okoro (University at Albany, USA); Amin Zare (KU Leuven, Belgium); Mariya Zheleva (UAlbany SUNY, USA)

Automated spectrum analysis has become an important capability of dynamic spectrum access networks. Outcomes from spectrum analytics feed into critical decisions such as (i) how to allocate network resources to clients, (ii) when to enforce penalties due to malicious or disruptive activity, and (iii) how to chart policies for future regulations. The insights gleaned from a spectrum trace, however, are only as objective as the trace itself, and artifacts that might have been introduced by sensor imperfections or configuration will inevitably propagate as (potentially false) analysis insights. Yet, spectrum analytics have been largely developed in isolation from the underlying data collection and are oblivious to sensor-induced artifacts.

To address this challenge, we develop VIA, a framework that quantifies spectrum data fidelity based on sensor properties and configuration. VIA takes as input a spectrum trace and the sensor configuration, and benchmarks data quality along three vectors: (i) Veracity, or how truthfully a scan captures spectrum activity, (ii) Intermittency, characterizing the temporal persistence of spectrum scans, and (iii) Ambiguity, encompassing the likelihood of false occupancy detection. We showcase VIA by studying the data fidelity of five common sensor platforms.
Speaker
Speaker biography is not available.

Session Chair

Salvatore D'Oro (Northeastern University, USA)

Session G-10

G-10: Edge Networks

Conference
1:30 PM — 3:00 PM PDT
Local
May 23 Thu, 4:30 PM — 6:00 PM EDT
Location
Prince of Wales/Oxford

Minimizing Latency for Multi-DNN Inference on Resource-Limited CPU-Only Edge Devices

Tao Wang (Tianjin University, China); Tuo Shi (City University of Hong Kong, Hong Kong); Xiulong Liu (Tianjin University, China); Jianping Wang (City University of Hong Kong, Hong Kong); Bin Liu (Tsinghua University, China); Yingshu Li (Georgia State University, USA); Yechao She (City University of Hong Kong, Hong Kong)

Despite the rapid development of specialized hardware, a vast number of IoT edge devices remain powered by traditional CPUs. As the number of IoT users expands, performing inference for multiple Deep Neural Networks on these resource-constrained, CPU-only edge devices presents significant challenges. Existing solutions such as model compression, hardware acceleration, and model partitioning either compromise inference accuracy, are not applicable due to hardware specificity, or result in inefficient resource utilization. To address these issues, this paper introduces L-PIC (Latency Minimized Parallel Inference on CPU), a framework specifically designed to optimize resource allocation, minimize inference latency, and maintain result accuracy on CPU-only edge devices. Comprehensive experiments validate the superior efficiency and effectiveness of the L-PIC framework compared to existing methods.
Speaker Tao Wang (Tianjin University)

Tao Wang received his BE degree in Computer Science and Technology from Ocean University of China, China. He is currently working toward the master’s degree in the College of Intelligence and Computing, Tianjin University, China. His research interests include edge computing and DNN Inference.


M3OFF: Module-Compositional Model-Free Computation Offloading in Multi-Environment MEC

Tao Ren (Institute of Software Chinese Academy of Sciences, China); Zheyuan Hu, Jianwei Niu and Weikun Feng (Beihang University, China); Hang He (Hangzhou Innovation Institute, Beihang University & Beihang University, China)

Computation offloading is one of the key issues in mobile edge computing (MEC): it alleviates the tension between user equipment's limited capabilities and mobile applications' high requirements. To achieve model-free computation offloading when reliable MEC dynamics are unavailable, deep reinforcement learning (DRL) has become a popular methodology. However, most existing DRL-based offloading approaches are developed for a single MEC environment, with invariant system bandwidth, edge capability, task types, etc., while realistic MEC scenarios tend to be highly diverse. Unfortunately, in multi-MEC environments, DRL-based offloading faces at least two challenges: learning inefficiency and interference among offloading experiences. To address these challenges, we propose a DRL-based Multi-environmental Module-compositional Model-free computation OFFloading (M3OFF) framework. M3OFF generates offloading policies using module composition instead of a single DRL network, so that learning efficiency can be improved by reusing the same modules and learning interference can be reduced by composing different modules. Furthermore, we design multiple module composition-specific training methods for M3OFF, including alternate modules-and-composer updates to improve training stability, loss regularization to avoid module degeneration, and module dropout to mitigate overfitting. Extensive experimental results on both simulation and a testbed demonstrate that M3OFF outperforms the state of the art by more than 16.7% in multi-MEC environments and approaches single-MEC performance.
Speaker Zheyuan Hu (Beihang University, China)

Zheyuan Hu received the B.S. degree in computer science and engineering from Northeastern University, Shenyang, China, in 2017, and the M.S. degree in computer science and engineering from Beihang University, Beijing, China, in 2021. He is currently pursuing the Ph.D. degree with the School of Computer Science and Engineering, Beihang University, Beijing, China. His research interests include mobile edge computing and distributed computing system.


On Efficient Zygote Container Planning and Task Scheduling for Edge Native Application Acceleration

Yuepeng Li (China University of Geosciences, China); Lin Gu (Huazhong University of Science and Technology, China); Zhihao Qu (Hohai University, China); Lifeng Tian and Deze Zeng (China University of Geosciences, China)

Edge native applications usually consist of several dependent tasks encapsulated in containers and started on-demand in the edge cloud. Unfortunately, application performance is deeply affected by the notorious cold-startup problem of containers. Pre-warming Zygote containers, which pre-import certain common packages, has proven to be an effective startup acceleration solution. Since a Zygote can be shared among co-located tasks that require identical common packages, not only the Zygote planning but also the task scheduling decisions must be carefully made to maximize the benefit of the Zygotes pre-warmed in limited memory. Additionally, task dependency necessitates co-locating highly dependent tasks on the same server, naturally raising a dilemma in task scheduling. To this end, in this paper, we investigate the problem of how to plan Zygotes and schedule tasks to minimize application completion time, which we prove to be NP-hard. We further propose a Priority and Popularity (P&P) based edge native application acceleration algorithm. Both theoretical analysis and extensive experiments demonstrate the effectiveness of the proposed algorithm. The experiment results show that P&P can reduce the application completion time by 11.7%.
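To make the planning trade-off concrete, here is a hypothetical greedy baseline (not the P&P algorithm): pre-import the packages that serve the most tasks per unit of memory until the Zygote memory budget runs out. All names and costs are illustrative.

```python
def plan_zygotes(task_packages, package_cost, memory_budget):
    """Greedily choose packages to pre-import, ranked by how many tasks need
    them per unit of memory, until the budget is exhausted."""
    popularity = {}
    for pkgs in task_packages:
        for p in pkgs:
            popularity[p] = popularity.get(p, 0) + 1
    chosen, used = set(), 0
    # Highest tasks-served-per-memory-unit first.
    for p in sorted(popularity, key=lambda p: -popularity[p] / package_cost[p]):
        if used + package_cost[p] <= memory_budget:
            chosen.add(p)
            used += package_cost[p]
    return chosen
```

A greedy popularity rule like this ignores task dependencies; the co-location dilemma described in the abstract is precisely what such a baseline cannot resolve.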
Speaker Yuepeng Li (China University of Geosciences, Wuhan)

Yuepeng Li received the B.S. and the M.S. degrees from the School of Computer Science, China University of Geosciences, Wuhan, China, in 2016 and 2019, respectively. His current research interests mainly focus on edge computing, and related technologies like task scheduling, and Trusted Execution Environment.


Optimization for the Metaverse over Mobile Edge Computing with Play to Earn

Chang Liu, Terence Jie Chua and Jun Zhao (Nanyang Technological University, Singapore)

The concept of the Metaverse has garnered growing interest from both academic and industry circles. The decentralization of both the integrity and security of digital items has spurred the popularity of play-to-earn (P2E) games, where players are entitled to earn and own digital assets which they may trade for physical-world currencies. However, these computationally intensive games are hardly playable on resource-limited mobile devices, and the computation tasks have to be offloaded to an edge server. Through mobile edge computing (MEC), users can upload data to the Metaverse Service Provider (MSP) edge servers for computing. Nevertheless, there is a trade-off between user-perceived in-game latency and user visual experience. The downlink transmission of lower-resolution videos lowers user-perceived latency while reducing visual fidelity and, consequently, the earnings of users. In this paper, we design a method to enhance the Metaverse-based MAR in-game user experience. Specifically, we formulate and solve a multi-objective optimization problem. Given the inherent NP-hardness of the problem, we present a low-complexity algorithm to address it, mitigating the trade-off between delay and earnings. The experiment results show that our method can effectively balance user-perceived latency and profitability, thus improving the performance of Metaverse-based MAR systems.
Speaker Chang Liu (Nanyang Technological University, Singapore)

Chang Liu is a Ph.D. student at Nanyang Technological University, Singapore. His research interests include edge computing, federated learning and Metaverse.


Session Chair

Junaid Ahmed Khan (Western Washington University, USA)

Session Break-3-2

Coffee Break

Conference
3:00 PM — 3:30 PM PDT
Local
May 23 Thu, 6:00 PM — 6:30 PM EDT
Location
Regency Foyer & Hallway

Session A-11

A-11: Topics in Wireless and Edge Networks

Conference
3:30 PM — 5:00 PM PDT
Local
May 23 Thu, 6:30 PM — 8:00 PM EDT
Location
Regency A

Talk2Radar: Talking to mmWave Radars via Smartphone Speaker

Kaiyan Cui (Nanjing University of Posts and Telecommunications & The Hong Kong Polytechnic University, China); Leming Shen and Yuanqing Zheng (The Hong Kong Polytechnic University, Hong Kong); Fu Xiao (Nanjing University of Posts and Telecommunications, China); Jinsong Han (Zhejiang University & School of Cyber Science and Technology, China)

Integrated Sensing and Communication (ISAC) is gaining a tremendous amount of attention from both academia and industry. Recent work has brought communication capability to sensing-oriented mmWave radars, enabling more innovative applications. These solutions, however, either require hardware modifications or suffer from limited data rates. This paper presents Talk2Radar, which builds a faster communication channel between smartphone speakers and mmWave radars, without any hardware modification to either commodity smartphones or off-the-shelf radars. In Talk2Radar, a smartphone speaker sends messages by playing carefully designed sounds. A mmWave radar acting as a data receiver captures the emitted sounds by detecting the sound-induced smartphone vibrations, and then decodes the messages. Talk2Radar characterizes smartphone speakers for speaker-to-mmWave-radar communication and addresses a series of technical challenges, including modulation and demodulation of extremely weak sound-induced vibrations, multi-speaker concurrent communication, and human motion suppression. We implement and evaluate Talk2Radar in various practical settings. Experimental results show that Talk2Radar can achieve a data rate of up to 400 bps with an average BER of less than 5%, outperforming the state of the art by approximately 33x.
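The modulate-then-demodulate pipeline can be pictured with a toy binary FSK link in Python: each bit selects one of two tones, and the receiver recovers the bit from the dominant FFT peak per symbol. The tone frequencies, symbol length, and FFT detector are our illustrative choices, not Talk2Radar's actual vibration-domain scheme.

```python
import numpy as np

def fsk_modulate(bits, f0=200.0, f1=400.0, fs=8000, sym_len=400):
    """Map each bit to a sine tone: f0 for 0, f1 for 1."""
    t = np.arange(sym_len) / fs
    return np.concatenate(
        [np.sin(2 * np.pi * (f1 if b else f0) * t) for b in bits])

def fsk_demodulate(signal, f0=200.0, f1=400.0, fs=8000, sym_len=400):
    """Recover bits by locating the dominant spectral peak of each symbol."""
    bits = []
    freqs = np.fft.rfftfreq(sym_len, 1.0 / fs)
    for i in range(0, len(signal), sym_len):
        spectrum = np.abs(np.fft.rfft(signal[i:i + sym_len]))
        peak = freqs[np.argmax(spectrum)]
        bits.append(1 if abs(peak - f1) < abs(peak - f0) else 0)
    return bits
```

With `sym_len=400` at `fs=8000` the FFT bin spacing is 20 Hz, so both tones fall exactly on bins and the peak detector is unambiguous in the noiseless case.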
Speaker Kaiyan Cui (Nanjing University of Posts and Telecommunications & The Hong Kong Polytechnic University, China)

Kaiyan Cui, an Assistant Professor in the School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing, China. She received the Joint Ph.D. degree from Hong Kong Polytechnic University, Hong Kong, China and Xi'an Jiaotong University, Xi’an, China, in 2023. Her research interests include smart sensing, mobile computing, and IoT.


Distributed Experimental Design Networks

Yuanyuan Li and Lili Su (Northeastern University, USA); Carlee Joe-Wong (Carnegie Mellon University, USA); Edmund Yeh and Stratis Ioannidis (Northeastern University, USA)

As edge computing capabilities increase, model learning deployments in diverse edge environments have emerged. In experimental design networks, introduced recently, network routing and rate allocation are designed to aid the transfer of data from sensors to heterogeneous learners. We design efficient experimental design network algorithms that are (a) distributed and (b) use multicast transmissions. This poses significant challenges, as classic decentralization approaches often operate on (strictly) concave objectives under differentiable constraints. In contrast, the problem we study here has a non-convex, continuous DR-submodular objective, while multicast transmissions naturally result in non-differentiable constraints. From a technical standpoint, we propose a distributed Frank-Wolfe and a distributed projected gradient ascent algorithm that, coupled with a relaxation of non-differentiable constraints, yield allocations within a $1-1/e$ factor of the optimal. Numerical evaluations show that our proposed algorithms outperform competitors w.r.t. model learning quality.
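A centralized Frank-Wolfe over a simple budget polytope illustrates the core iteration (a much-simplified cousin of the paper's distributed variant; the objective, constraint set, and step size below are ours): at each step, find the vertex that best aligns with the current gradient and move a diminishing fraction of the way toward it.

```python
import numpy as np

def frank_wolfe(grad, d, budget=1.0, steps=300):
    """Maximize a concave objective over {x >= 0, sum(x) <= budget} by
    repeatedly stepping toward the best vertex for the current gradient."""
    x = np.zeros(d)
    for t in range(steps):
        g = grad(x)
        v = np.zeros(d)
        if g.max() > 0:                    # best vertex: whole budget on argmax
            v[int(np.argmax(g))] = budget
        gamma = 2.0 / (t + 2)              # classic diminishing step size
        x = (1 - gamma) * x + gamma * v    # convex combination stays feasible
    return x
```

For instance, maximizing the symmetric concave objective `sum(log(1 + x_i))` drives the allocation toward an even split of the budget.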
Speaker
Speaker biography is not available.

Roaming across the European Union in the 5G Era: Performance, Challenges, and Opportunities

Rostand A. K. Fezeu (University of Minnesota, USA); Claudio Fiandrino (IMDEA Networks Institute, Spain); Eman Ramadan, Jason Carpenter, Daqing Chen and Yiling Tan (University of Minnesota - Twin Cities, USA); Feng Qian (University of Minnesota, Twin Cities, USA); Joerg Widmer (IMDEA Networks Institute, Spain); Zhi-Li Zhang (University of Minnesota, USA)

Roaming provides users with voice and data connectivity when traveling abroad. This is particularly the case in Europe where the ``Roam like Home'' policy established by the European Union in 2017 has made roaming affordable. Nonetheless, due to various policies employed by operators, roaming can incur considerable performance penalty as shown in past studies of 3G/4G networks. As 5G provides significantly higher bandwidth, how does roaming affect user-perceived performance? We present, to the best of our knowledge, the first comprehensive and comparative measurement study of commercial 5G in four European countries.

Our measurement study is unique in that it links key 5G mid-band channels and configuration parameters (``policies'') used by various operators in these countries with their effect on the observed 5G performance from the network (in particular, the physical and MAC layers) and application perspectives. Our measurement study not only portrays the quality of experience users observe when roaming, but also provides guidance for optimizing network configurations and helps users and application developers choose mobile operators. Moreover, our contribution provides the research community with, to our knowledge, the largest cross-country 5G roaming dataset to stimulate further research.
Speaker
Speaker biography is not available.

Two-Stage Distributionally Robust Edge Node Placement Under Endogenous Demand Uncertainty

Jiaming Cheng (University of British Columbia, Canada); Duong Thuy Anh Nguyen and Duong Tung Nguyen (Arizona State University, USA)

Edge computing (EC) promises to deliver low-latency and ubiquitous computation to numerous devices at the network edge. This paper aims to jointly optimize edge node (EN) placement and resource allocation for an EC platform, considering demand uncertainty. Diverging from existing approaches that treat uncertainties as exogenous, we propose a novel two-stage decision-dependent distributionally robust optimization (DRO) framework to effectively capture the interdependence between EN placement decisions and uncertain demands. The first stage involves making EN placement decisions, while the second stage optimizes resource allocation after the uncertainty is revealed. We present an exact mixed-integer linear program reformulation for solving the underlying ``min-max-min'' two-stage model. We further introduce a valid inequality method to enhance computational efficiency, especially for large-scale networks. Extensive numerical experiments demonstrate the benefits of considering endogenous uncertainties and the advantages of the proposed model and approach.
Speaker
Speaker biography is not available.

Session Chair

Duong Tung Nguyen (Arizona State University, USA)

Session B-11

B-11: Topics in Secure and Reliable Networks

Conference
3:30 PM — 5:00 PM PDT
Local
May 23 Thu, 6:30 PM — 8:00 PM EDT
Location
Regency B

SyPer: Synthesis of Perfectly Resilient Local Fast Rerouting Rules for Highly Dependable Networks

Csaba Györgyi (ELTE Eötvös Loránd University, Hungary); Kim Larsen (CISS, Denmark); Stefan Schmid (TU Berlin, Germany); Jiri Srba (Aalborg University, Denmark)

Modern communication networks support local fast re-routing (FRR) to quickly react to link failures. However, configuring such FRR mechanisms is challenging, as the rules have to be defined ahead of time, without knowledge of the failures, and can depend only on local decisions made by the nodes incident to a failed link. Designing failover protection against multiple link failures is particularly difficult. We present a novel synthesis approach which addresses this challenge by generating FRR rules in an automated and provably correct manner. Our network model assumes that each node maintains a prioritised list of backup links (a.k.a. skipping forwarding) - an FRR method that allows for a memory-efficient deployment. We study the theoretical properties of the model and implement a synthesis method in our tool SyPer that aims to provide perfect resilience: if there are up to k link failures, we can always route traffic between any two nodes as long as they are still connected in the underlying physical network. To this end, SyPer focuses on the synthesis of efficient forwarding rules using the BDD (binary decision diagram) methodology, and our empirical evaluation shows that SyPer is feasible and can synthesize robust network configurations in realistic settings.
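The prioritised-backup-list model can be simulated directly. The snippet below is our toy model, not SyPer's synthesis: lists are destination-specific, a hop cap stands in for loop detection, and it merely checks whether a packet still reaches its destination under a given set of failed links.

```python
def route(backup_lists, src, dst, failed, max_hops=50):
    """Forward using each node's prioritised backup list: at every node take
    the first listed neighbour whose incident link has not failed."""
    node, hops = src, 0
    while node != dst and hops < max_hops:
        next_hop = next((nbr for nbr in backup_lists[node]
                         if frozenset((node, nbr)) not in failed), None)
        if next_hop is None:
            return False  # every locally visible option has failed
        node, hops = next_hop, hops + 1
    return node == dst
```

On a 4-node ring 0-1-2-3 with destination 2, the lists `{0: [1, 3], 1: [2, 0], 3: [2, 0]}` survive any single link failure, but two failures can disconnect the source, in which case no FRR rule set could help.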
Speaker
Speaker biography is not available.

Reverse Engineering Industrial Protocols Driven By Control Fields

Zhen Qin and Zeyu Yang (Zhejiang University, China); Yangyang Geng (Information Engineering University, China); Xin Che, Tianyi Wang and Hengye Zhu (Zhejiang University, China); Peng Cheng (Zhejiang University & Singapore University of Technology and Design, China); Jiming Chen (Zhejiang University, China)

Industrial protocols are widely used in Industrial Control Systems (ICSs) to network physical devices, thus playing a crucial role in securing ICSs. However, most commercial industrial protocols are proprietary and owned by their vendors, which impedes the implementation of protections against cyber threats. In this paper, we design REInPro to Reverse Engineer Industrial Protocols. REInPro is inspired by the fact that the structure of industrial protocols can be determined by a particular field referred to as the control field. By applying a probabilistic model of network traffic behavior, REInPro automatically identifies the control field and groups the associated network traffic into clusters. REInPro then infers critical semantics of industrial protocols by differentiating the features of the corresponding protocol fields. We have experimentally implemented and evaluated REInPro using 8 different industrial protocols across 6 Programmable Logic Controllers (PLCs) belonging to 5 original equipment manufacturers. The experimental results show that REInPro reverse engineers the formats and semantics of industrial protocols with average correctness/perfection scores of 0.70/0.58 and 0.96/0.39, respectively.
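The control-field idea can be illustrated with a toy heuristic (REInPro's actual probabilistic traffic model is more involved, and the messages below are fabricated): score each byte offset by how consistently its value predicts message length, and treat the best-scoring offset as the control field.

```python
from collections import defaultdict
from statistics import pvariance

# Fabricated traffic: byte 0 is the "control field" deciding the format.
msgs = [bytes([1, 9, 0, 0]), bytes([1, 3, 0, 0]),
        bytes([2, 3, 0, 0, 0, 0]), bytes([2, 5, 0, 0, 0, 0])]

def score(offset):
    groups = defaultdict(list)
    for m in msgs:
        if offset < len(m):
            groups[m[offset]].append(len(m))
    # lower within-group length variance => better control-field candidate
    return sum(pvariance(ls) if len(ls) > 1 else 0.0
               for ls in groups.values())

best = min(range(min(len(m) for m in msgs)), key=score)
print(best)   # offset 0 splits the traffic into clean length clusters
```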
Speaker Zhen Qin (Zhejiang University)

She is a graduate student at Zhejiang University; her research focuses on industrial control system security and protocol reverse engineering.



Sharon: Secure and Efficient Cross-shard Transaction Processing via Shard Rotation

Shan Jiang (The Hong Kong Polytechnic University, Hong Kong); Jiannong Cao (Hong Kong Polytechnic Univ, Hong Kong); Cheung Leong Tung and Yuqin Wang (The Hong Kong Polytechnic University, China); Shan Wang (The Hong Kong Polytechnic University & Southeast University, China)

Recently, sharding has become a popular direction to scale out blockchain systems by dividing the network into shards that process transactions in parallel. However, secure and efficient cross-shard transaction processing remains a vital and unaddressed challenge. Existing work handles a cross-shard transaction via transaction division: dividing it into sub-transactions, processing them separately, and combining the processing results. Such an approach is unfavorable for decentralized blockchains due to its reliance on trustworthy parties, e.g., the client or a reference node, to perform the transaction division and result combination. Furthermore, the processing result of one transaction can affect another, violating the important property of transaction isolation. In this work, we propose Sharon, a novel sharding protocol that processes cross-shard transactions via shard rotation rather than transaction division. In Sharon, shards rotate to merge pairwise and process cross-shard transactions when merged. Sharon eliminates the reliance on trustworthy parties and inherently provides transaction isolation because transactions are no longer divided. Nevertheless, it poses a scientific question of when and how to merge the shards to improve system performance. To answer this question, we formally define the shard scheduling problem to minimize transaction confirmation latency and propose a novel construction algorithm.
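Pairwise shard rotation can be sketched with the classic round-robin "circle method" (a scheduling illustration of our own; Sharon's scheduler additionally optimizes when and how to merge so as to minimize confirmation latency):

```python
# Round-robin shard rotation sketch: over n-1 rounds, every pair of
# shards merges exactly once, and each round's merges are disjoint.

def rotation_schedule(n):
    assert n % 2 == 0
    shards = list(range(n))
    rounds = []
    for _ in range(n - 1):
        rounds.append([(shards[i], shards[n - 1 - i]) for i in range(n // 2)])
        # circle method: keep the first position fixed, rotate the rest
        shards = [shards[0]] + [shards[-1]] + shards[1:-1]
    return rounds

sched = rotation_schedule(4)
for r in sched:
    print(r)
```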
Speaker Shan Jiang (The Hong Kong Polytechnic University)



Dynamic Learning-based Link Restoration in Traffic Engineering with Archie

Wenlong Ding and Hong Xu (The Chinese University of Hong Kong, Hong Kong)

Fiber cuts reduce network capacity and take a long time to fix in optical wide-area networks. It is important to select the best restoration plan, which minimizes throughput loss by reconfiguring wavelengths on the remaining healthy fibers for the affected IP links. Recent work studies the optimal restoration plan, or ticket, selection problem in traffic engineering (TE) in a one-shot setting of a single TE interval (5 minutes). Since fiber repair often takes hours, in this work we extend the problem to restoration ticket selection with traffic dynamics over multiple intervals.

To balance restoration performance with reconfiguration overhead, we perform dynamic ticket selection every T time steps. We propose an end-to-end learning approach that solves this T-step ticket selection problem as a classification task, combining traffic trend extraction and ticket selection in the same learning model. It uses a convolutional LSTM network to extract temporal and spatial features from past demand matrices and determine the ticket most likely to perform well T steps down the road, without predicting future traffic or solving any TE optimization. Trace-driven simulation shows that our new TE system, Archie, reduces throughput loss by over 25% and is over 3500x faster than the conventional demand-prediction approach, which requires solving TE many times.
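The multi-interval selection objective can be illustrated with an oracle baseline (not Archie's learned classifier; capacities and demands are invented): a ticket fixes the restored capacity, and we prefer the ticket that serves the most traffic over the next T intervals rather than just the current one.

```python
# Toy multi-interval ticket selection: each ticket restores a certain
# capacity, and we score it by the traffic it serves across T intervals.

tickets = {"t1": 10, "t2": 8}      # invented: capacity each ticket restores
demands = [6, 9, 12, 7]            # traffic over the next T = 4 intervals

def served(cap, demand):
    return min(cap, demand)        # throughput is capped by capacity

def pick_ticket(tickets, demands):
    return max(tickets,
               key=lambda t: sum(served(tickets[t], d) for d in demands))

print(pick_ticket(tickets, demands))   # 't1'
```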
Speaker Wenlong Ding (The Chinese University of Hong Kong)

Wenlong Ding is currently pursuing his Ph.D. degree in Department of Computer Science and Engineering, The Chinese University of Hong Kong. He received his B.E. degree with honors in Computer Science and Technology from Huazhong University of Science and Technology, China, in 2021. His current research interests include machine learning for various network management tasks, with a specific focus on network traffic and configuration management tasks.


Session Chair

Dianqi Han (University of Texas at Arlington, USA)

Session C-11

C-11: User experience, Orchestration, and Telemetry

Conference
3:30 PM — 5:00 PM PDT
Local
May 23 Thu, 6:30 PM — 8:00 PM EDT
Location
Regency C

Vulture: Cross-Device Web Experience with Fine-Grained Graphical User Interface Distribution

Seonghoon Park and Jeho Lee (Yonsei University, Korea (South)); Yonghun Choi (Korea Institute of Science and Technology (KIST), Korea (South)); Hojung Cha (Yonsei University, Korea (South))

We propose a cross-device web solution, called Vulture, which distributes graphical user interface (GUI) elements of apps across multiple devices without requiring modifications of web apps or browsers. Several challenges must be resolved to achieve this goal. First, the server-peer configuration should be efficiently established to distribute web resources in cross-device web environments. Vulture exploits an in-browser virtual proxy that runs the web server's functionality in web browsers using a virtual HTTP scheme and a relevant API. Second, the functional consistency of web apps must be ensured in GUI-distributed environments. Vulture solves this challenge by providing a single-browser illusion with a two-tier Document Object Model (DOM) architecture, which handles view state changes and user input seamlessly in cross-device environments. We implemented Vulture and extensively evaluated the system under various combinations of operating platforms, devices, and network capabilities while running 50 real web apps. The experimental results show that the proposed scheme provides functionally consistent cross-device web experiences by allowing fine-grained GUI distribution. We also confirmed that the in-browser virtual proxy reduces the GUI distribution time and the view change reproduction time by averages of 38.47% and 20.46%, respectively.
Speaker
Speaker biography is not available.

OpenINT: Dynamic In-Band Network Telemetry with Lightweight Deployment and Flexible Planning

Jiayi Cai (FuZhou University & Quan Cheng Laboratory, China); Hang Lin (Fuzhou University & Quan Cheng Laboratory, China); Tingxin Sun (Fuzhou University, China); Zhengyan Zhou (Zhejiang University, China); Longlong Zhu (Fuzhou University & Quan Cheng Laboratory, China); Haodong Chen (FuZhou University, China); Jiajia Zhou (Fuzhou University, China); Dong Zhang (Fuzhou University & Quan Cheng Laboratory, China); Chunming Wu (College of Computer Science, Zhejiang University, China)

The normal operation of data center network management tasks relies on accurate measurement of the network status. In-band Network Telemetry (INT) leverages programmable data planes to provide fine-grained and accurate network status. However, existing INT-related works do not support dynamically adjusting the collected telemetry data without interruption, including additions, deletions, and modifications. To address this issue, this paper proposes OpenINT, a lightweight and flexible In-band Network Telemetry system. The key innovation of OpenINT lies in decoupling telemetry operations in the data plane, using three generic sub-modules to achieve lightweight telemetry. Meanwhile, the control plane uses heuristic algorithms for dynamic planning to achieve near-optimal telemetry paths. Additionally, OpenINT provides primitives for defining network measurement tasks, which abstract away the details of the underlying telemetry architecture and enable network operators to conveniently access the network status. A prototype of OpenINT is implemented on a programmable switch equipped with a Tofino chip. Experimental results demonstrate that OpenINT achieves highly flexible dynamic telemetry and significantly reduces network overhead.
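Generic INT behaviour, which OpenINT builds on, can be sketched as follows (the instruction set and switch state fields are illustrative, not OpenINT's actual sub-module interface): each hop appends the metadata requested by the packet's instruction list.

```python
from dataclasses import dataclass, field

# Minimal in-band telemetry sketch: a packet carries an instruction list,
# and every switch on the path pushes the requested metadata onto it.

@dataclass
class Packet:
    instructions: tuple                       # metadata keys to collect
    telemetry: list = field(default_factory=list)

def int_hop(pkt, switch_state):
    pkt.telemetry.append({k: switch_state[k] for k in pkt.instructions})
    return pkt

pkt = Packet(instructions=("switch_id", "queue_depth"))
path = [{"switch_id": 1, "queue_depth": 3, "ts": 10},
        {"switch_id": 2, "queue_depth": 0, "ts": 11}]
for sw in path:
    int_hop(pkt, sw)
print(pkt.telemetry)
```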
Speaker Jiayi Cai (Fuzhou University)

Jiayi Cai received the B.S. degree in Computer Science from Fuzhou University, Fuzhou, China. He is currently pursuing the M.S. degree in Computer Software and Theory from the College of Computer and Data Science, Fuzhou University. His research interests include Network Measurement and Programmable Data Plane.


Demeter: Fine-grained Function Orchestration for Geo-distributed Serverless Analytics

Xiaofei Yue, Song Yang and Liehuang Zhu (Beijing Institute of Technology, China); Stojan Trajanovski (Microsoft, United Kingdom (Great Britain)); Xiaoming Fu (University of Goettingen, Germany)

In the era of global services, low-latency analytics on large-volume geo-distributed data has become a regular demand of application decision-making. Serverless computing facilitates fast start-up and deployment, making it an attractive choice for geo-distributed analytics. We find that the serverless paradigm has the potential to overcome current performance bottlenecks via fine-grained function orchestration. However, how to implement and configure it for geo-distributed analytics remains ambiguous. To fill this gap, we present Demeter, a scalable fine-grained function orchestrator for geo-distributed serverless analytics systems, which minimizes the average cost of co-existing jobs while satisfying user-specific Service Level Objectives (SLOs). To handle the unstable environment and learn diverse function resource demands, a Multi-Agent Reinforcement Learning (MARL) algorithm is used to co-optimize function placement and resource allocation. The MARL algorithm extracts holistic and compact states via scalable hierarchical Graph Neural Networks (GNNs) and employs a novel actor network to reduce the decision space and model complexity. Finally, we implement a Demeter prototype and evaluate it using realistic analytics workloads. Extensive experimental results show that, compared with state-of-the-art baselines, Demeter saves costs by 32.7% and 23.3% while reducing SLO violations by 27.4%.
Speaker Xiaofei Yue (Beijing Institute of Technology)

Xiaofei Yue is currently a Ph.D. candidate at the School of Computer Science and Technology at Beijing Institute of Technology, Beijing, China. He received the M.E. degree from Northeastern University, Shenyang, China, in 2022. His main research interests include distributed systems, cloud/serverless computing, and data analytics.


Pscheduler: QoE-Enhanced MultiPath Scheduler for Video Services in Large-scale Peer-to-Peer CDNs

Dehui Wei and Jiao Zhang (Beijing University of Posts and Telecommunications, China); HaoZhe Li (ByteDance, China); Zhichen Xue (Bytedance Ltd., China); Jialin Li (National University of Singapore, Singapore); Yajie Peng (Bytedance, China); Xiaofei Pang (Non, China); Yuanjie Liu (Beijing University of Posts and Telecommunications, China); Rui Han (Bytedance Inc., China)

Video content providers such as Douyin implement Peer-to-Peer Content Delivery Networks (PCDNs) to reduce the costs associated with Content Delivery Networks (CDNs) while still maintaining optimal user-perceived quality of experience (QoE). PCDNs rely on the remaining resources of edge devices, such as edge access devices and hosts, to store and distribute data with a Multiple-Server-to-One-Client (MS2OC) communication pattern. This MS2OC parallel transmission pattern suffers from severe out-of-order delivery issues, and directly applying existing schedulers designed for MPTCP to PCDNs fails to meet the two goals of high aggregate bandwidth and low end-to-end delivery latency.

To address this, we present a comprehensive description of Douyin's self-developed PCDN video transmission system and propose Pscheduler, the first QoE-enhanced packet-level scheduler for PCDN systems. Pscheduler estimates path quality using a congestion-control-decoupled algorithm and distributes data with the proposed path-pick-packet method to ensure smooth video playback. Additionally, a redundant transmission algorithm is proposed to improve task download speed for segmented video transmission. Our large-scale online A/B tests, comprising 100,000 Douyin users generating tens of millions of videos, show that Pscheduler achieves an average improvement of 60% in goodput, a 20% reduction in data delivery waiting time, and a 30% reduction in rebuffering rate.
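The path-quality-driven distribution idea can be sketched with a generic earliest-completion heuristic (not Pscheduler's exact path-pick-packet algorithm; the rates and delays below are invented): each packet goes to the path with the smallest estimated completion time, so the faster path carries proportionally more packets while slower peers still contribute bandwidth.

```python
# Earliest-completion multipath scheduling sketch: a path's estimated
# completion time is queued bytes / rate + one-way delay, and each new
# packet is assigned to the currently cheapest path.

paths = [  # invented path models
    {"name": "cdn",   "rate": 1e6, "delay": 0.02, "queued": 0.0},
    {"name": "peer1", "rate": 4e5, "delay": 0.05, "queued": 0.0},
]

def schedule(pkt_bytes):
    best = min(paths,
               key=lambda p: (p["queued"] + pkt_bytes) / p["rate"] + p["delay"])
    best["queued"] += pkt_bytes
    return best["name"]

assignments = [schedule(1500) for _ in range(100)]
print(assignments.count("cdn"), assignments.count("peer1"))
```

The faster "cdn" path absorbs most packets until its queue makes the slower peer path competitive, which is the load-balancing behaviour a packet-level scheduler aims for.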
Speaker Dehui Wei (Beijing University of Posts and Telecommunications)

Dehui Wei is currently working toward her Ph.D. degree at the State Key Laboratory of Networking and Switching Technology of Beijing University of Posts and Telecommunications (BUPT). She received the B.E. degree in computer science and technology from Hunan University, Changsha, China, in 2019 and was recognized as an outstanding graduate. Her research interests are in the areas of network transmission control and cloud computing.


Session Chair

Eirini Eleni Tsiropoulou (University of New Mexico, USA)

Session D-11

D-11: Network Computing and Offloading

Conference
3:30 PM — 5:00 PM PDT
Local
May 23 Thu, 6:30 PM — 8:00 PM EDT
Location
Regency D

Analog In-Network Computing through Memristor-based Match-Compute Processing

Saad Saleh, Anouk S. Goossens, Sunny Shu and Tamalika Banerjee (University of Groningen, The Netherlands); Boris Koldehofe (TU Ilmenau, Germany)

Current network functions consume a significant amount of energy and lack the capacity to support more expressive learning models like neuromorphic functions. The major reason is the underlying transistor-based components, which require continuous, energy-intensive data movements between the storage and computational units. In this research, we propose the use of a novel component, called the memristor, which can colocalize computation and storage. Building on memristors, we propose the concept of match-compute processing for supporting energy-efficient network functions. Considering the analog processing of memristors, we propose a Probabilistic Content Addressable Memory (pCAM) abstraction which provides analog match functions: pCAM yields deterministic and probabilistic outputs depending upon how closely an incoming query matches the specified network policy, and uses a crossbar array for line-rate matrix multiplications on the match outputs. We propose a match-compute packet processing architecture and develop programming abstractions for a baseline network function, i.e., Active Queue Management, which drops packets based upon the higher-order derivatives of sojourn times and buffer sizes. The analysis of match-compute processing on a physically fabricated memristor chip showed only 0.01 fJ/bit/cell of energy consumption, which is 50 times better than match-action processing.
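A purely numerical sketch of the match-compute idea (an idealized software model, not the behaviour of the fabricated chip; the patterns, conductances, and exponential match curve are all illustrative): a pCAM row's match degree decays with the distance between the query and the stored pattern, and a crossbar then applies a weight matrix to the analog match vector in one multiply.

```python
import numpy as np

# Idealized pCAM + crossbar model: match degrees are analog values in
# (0, 1], and the crossbar's matrix multiply maps them to an action vector.

stored = np.array([[1, 0, 1, 0],       # network policy patterns
                   [1, 1, 0, 0]])
weights = np.array([[0.9, 0.1],        # invented crossbar conductances
                    [0.2, 0.8]])

def pcam_match(query, sharpness=4.0):
    dist = np.abs(stored - query).sum(axis=1)
    return np.exp(-sharpness * dist)   # 1.0 = exact match, -> 0 otherwise

query = np.array([1, 0, 1, 0])
match = pcam_match(query)
action = weights @ match               # "line-rate" matrix multiply
print(match.round(3), action.round(3))
```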
Speaker Saad Saleh (University of Groningen)

Saad Saleh received his Master's and Bachelor's degrees in Electrical Engineering with majors in Networks and Communications. He is currently researching cognitive and energy-efficient network functions using memristor-based in-network processing architectures. This interdisciplinary research is in collaboration with the Groningen Cognitive Systems and Materials Center (CogniGron), The Netherlands.


Carlo: Cross-Plane Collaboration for Multiple In-network Computing Applications

Xiaoquan Zhang, Lin Cui and WaiMing Lau (Jinan University, China); Fung Po Tso (Loughborough University, United Kingdom (Great Britain)); Yuhui Deng (Jinan University, China); Weijia Jia (Beijing Normal University (Zhuhai) and UIC, China)

In-network computing (INC) is a new paradigm that allows applications to be executed within the network, rather than on dedicated servers. Conventionally, INC applications have been deployed exclusively on the data plane (e.g., programmable ASICs), offering impressive performance. However, the data plane's efficiency is hindered by limited resources, which can prevent a comprehensive deployment of applications. On the other hand, offloading compute tasks to the control plane, which is underpinned by general-purpose servers with ample resources, provides greater flexibility, but at the cost of significantly reduced efficiency, especially when the system operates under heavy load. To simultaneously exploit the efficiency of the data plane and the flexibility of the control plane, we propose Carlo, a cross-plane collaborative optimization framework that supports the network-wide deployment of multiple INC applications. Carlo first analyzes the resource requirements of various INC applications across different planes. It then establishes mathematical models for cross-plane resource allocation and automatically generates solutions using the proposed algorithms. We have implemented a prototype of Carlo on Intel Tofino ASIC switches and DPDK. Experimental results demonstrate that Carlo computes solutions in a short time while ensuring that the deployment scheme does not suffer from performance degradation.
Speaker
Speaker biography is not available.

TileSR: Accelerate On-Device Super-Resolution with Parallel Offloading in Tile Granularity

Ning Chen and Sheng Zhang (Nanjing University, China); Yu Liang (Nanjing Normal University, China); Jie Wu (Temple University, USA); Yu Chen, Yuting Yan, Zhuzhong Qian and Sanglu Lu (Nanjing University, China)

Recent years have witnessed the unprecedented performance of convolutional networks in image super-resolution (SR). SR involves upscaling a single low-resolution image to meet application-specific image quality demands, making it vital for mobile devices. However, the excessive computational and memory requirements of SR tasks pose a challenge in mapping SR networks onto a single resource-constrained mobile device, especially for an ultra-high target resolution. This work presents TileSR, a novel framework for efficient image SR through tile-granular parallel offloading across multiple collaborative mobile devices. In particular, for an incoming image, TileSR first uniformly divides it into multiple tiles and selects the top-K tiles with the highest upscaling difficulty (quantified by mPV). Then, we propose a tile scheduling algorithm based on multi-agent multi-armed bandits, which attains accurate offload rewards through an exploration phase, derives the tile packing decision from the reward estimates, and uses this decision to schedule the selected tiles. We have implemented TileSR entirely on COTS hardware, and the experimental results demonstrate that TileSR reduces response latency by 17.77-82.2% while improving image quality by 2.38-10.57% compared to the alternatives.
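Tile selection can be sketched as follows (per-tile variance stands in for the paper's mPV difficulty metric, and the bandit-based scheduling of the selected tiles is omitted): split the frame into tiles and keep the top-K hardest ones for offloaded SR while the rest are cheaply upscaled on-device.

```python
import numpy as np

# Top-K difficult-tile selection sketch: tiles with more texture (higher
# variance here, as a stand-in difficulty score) benefit most from SR.

def top_k_tiles(img, tile, k):
    h, w = img.shape
    scores = {}
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            scores[(y, x)] = img[y:y + tile, x:x + tile].var()
    return sorted(scores, key=scores.get, reverse=True)[:k]

rng = np.random.default_rng(0)
img = np.zeros((8, 8))
img[0:4, 4:8] = rng.normal(size=(4, 4))   # one textured (hard) region
print(top_k_tiles(img, 4, 1))             # [(0, 4)]
```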
Speaker
Speaker biography is not available.

SECO: Multi-Satellite Edge Computing Enabled Wide-Area and Real-Time Earth Observation Missions

Zhiwei Zhai (Sun Yat-Sen University, China); Liekang Zeng (Hong Kong University of Science and Technology (Guangzhou) & Sun Yat-Sen University, China); Tao Ouyang and Shuai Yu (Sun Yat-Sen University, China); Qianyi Huang (Sun Yat-Sen University, China & Peng Cheng Laboratory, China); Xu Chen (Sun Yat-sen University, China)

Rapid advances in low Earth orbit (LEO) satellite technology and satellite edge computing (SEC) have given LEO satellites a key role in enhanced Earth observation missions (EOM). These missions typically require multi-satellite cooperative observation of a large region of interest (RoI), as well as routing and computational processing of the observation images, to enable accurate and real-time responsiveness. However, optimizing the resources of LEO satellite networks is nontrivial given their dynamic and heterogeneous properties. To this end, we propose SECO, a SEC-enabled framework that jointly optimizes multi-satellite observation scheduling, routing, and computation node selection for enhanced EOM. Specifically, in the observation phase, we leverage the orbital motion and rotatable onboard cameras of satellites and propose a distributed game-based scheduling strategy to minimize the overall size of captured images while ensuring full observation coverage. In the subsequent routing and computation phase, we first adopt image splitting to achieve parallel transmission and computation. We then propose an efficient iterative algorithm to jointly optimize image splitting, routing, and computation node selection for each captured image. On this basis, we propose a theoretically guaranteed system-wide greedy strategy to reduce the total time cost of processing multiple images simultaneously.
Speaker
Speaker biography is not available.

Session Chair

Binbin Chen (Singapore University of Technology and Design, Singapore)

Session E-11

E-11: Machine Learning 5

Conference
3:30 PM — 5:00 PM PDT
Local
May 23 Thu, 6:30 PM — 8:00 PM EDT
Location
Regency E

Taming Subnet-Drift in D2D-Enabled Fog Learning: A Hierarchical Gradient Tracking Approach

Evan Chen (Purdue University, USA); Shiqiang Wang (IBM T. J. Watson Research Center, USA); Christopher G. Brinton (Purdue University, USA)

Federated learning (FL) encounters scalability challenges when implemented over fog networks. Semi-decentralized FL (SD-FL) proposes a solution that divides model cooperation into two stages: at the lower stage, device-to-device (D2D) communication is employed for local model aggregations within subnetworks (subnets), while the upper stage handles device-server (DS) communications for global model aggregations. However, existing SD-FL schemes are based on gradient diversity assumptions that become performance bottlenecks as data distributions become more heterogeneous. In this work, we develop semi-decentralized gradient tracking (SD-GT), the first SD-FL methodology that removes the need for such assumptions by incorporating tracking terms into device updates for each communication layer. Analytical characterization of SD-GT reveals convergence upper bounds for both non-convex and strongly-convex problems, for a suitable choice of step size. We employ the resulting bounds in the development of a co-optimization algorithm for optimizing subnet sampling rates and D2D rounds according to a performance-efficiency trade-off. Our subsequent numerical evaluations demonstrate that SD-GT obtains substantial improvements in trained model quality and communication cost relative to baselines in SD-FL and gradient tracking on several datasets.
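The single-layer gradient-tracking update that SD-GT generalizes to its hierarchical D2D/DS setting can be sketched on two nodes (toy quadratic objectives; the step size and mixing matrix are chosen for illustration):

```python
import numpy as np

# Classic decentralized gradient tracking: each node keeps a tracker y of
# the network-wide average gradient, so consensus on the global optimum
# is reached even though local objectives are heterogeneous.

targets = np.array([0.0, 10.0])            # node i minimizes (x - t_i)^2
grad = lambda x: 2 * (x - targets)         # per-node local gradients
W = np.array([[0.5, 0.5], [0.5, 0.5]])     # doubly stochastic mixing

x = np.zeros(2)
y = grad(x)                                # trackers start at local grads
for _ in range(200):
    g_old = grad(x)
    x = W @ (x - 0.1 * y)                  # mix, then step along tracker
    y = W @ y + grad(x) - g_old            # track the average gradient

print(x.round(3))   # both nodes converge to the global optimum 5.0
```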
Speaker
Speaker biography is not available.

Towards Efficient Asynchronous Federated Learning in Heterogeneous Edge Environments

Yajie Zhou (Zhejiang University, China); Xiaoyi Pang (Wuhan University, China); Zhibo Wang and Jiahui Hu (Zhejiang University, China); Peng Sun (Hunan University, China); Kui Ren (Zhejiang University, China)

Federated learning (FL) is widely used in edge environments as a privacy-preserving collaborative learning paradigm. However, edge devices often have heterogeneous computation capabilities and data distributions, hampering the efficiency of co-training. Existing works develop staleness-aware semi-asynchronous FL that reduces the contribution of slow devices to the global model to mitigate their negative impacts. However, this prevents the data on slow devices from being fully leveraged in global model updating, exacerbating the effects of data heterogeneity. In this paper, to cope with both system and data heterogeneity, we propose a clustering and two-stage aggregation-based Efficient Asynchronous Federated Learning (EAFL) framework, which can achieve better learning performance with higher efficiency in heterogeneous edge environments. In EAFL, we first propose a gradient similarity-based dynamic clustering mechanism to cluster devices with similar system and data characteristics together dynamically during the training process. Then, we develop a novel two-stage aggregation strategy consisting of staleness-aware semi-asynchronous intra-cluster aggregation and data size-aware synchronous inter-cluster aggregation to efficiently and comprehensively aggregate training updates across heterogeneous clusters. With that, the negative impacts of slow devices and Non-IID data can be simultaneously alleviated, thus achieving efficient collaborative learning. Extensive experiments demonstrate that EAFL is superior to state-of-the-art methods.
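The staleness-discounting idea used by such semi-asynchronous schemes can be sketched as follows (scalars stand in for model weight vectors, and the discount form and α = 0.6 are illustrative choices, not EAFL's exact rule, which further adds clustering and a second, data-size-weighted synchronous stage):

```python
# Staleness-aware aggregation sketch: an update computed against an older
# global model gets a smaller mixing coefficient.

def staleness_weight(staleness, alpha=0.6):
    return alpha / (1 + staleness)          # one simple discount choice

def aggregate(global_model, updates):
    # updates: list of (local_model, staleness measured in rounds)
    for local, s in updates:
        w = staleness_weight(s)
        global_model = (1 - w) * global_model + w * local
    return global_model

# a fresh update (staleness 0) moves the model far more than a stale one
m = aggregate(0.0, [(1.0, 0), (1.0, 4)])
print(round(m, 3))   # 0.648
```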
Speaker Yajie Zhou (Zhejiang University)

Yajie Zhou received the BS degree from Huazhong University of Science and Technology, China, in 2023. She is currently working toward the PhD degree with the School of Cyber Science and Technology, Zhejiang University. Her main research interests include edge intelligence and Internet of Things.


Personalized Prediction of Bounded-Rational Bargaining Behavior in Network Resource Sharing

Haoran Yu and Fan Li (Beijing Institute of Technology, China)

There have been many studies leveraging bargaining to incentivize the sharing of network resources between resource owners and seekers. They predicted bargaining behavior and outcomes mainly by assuming that bargainers are fully rational and possess sufficient knowledge about their opponents. Our work addresses the prediction of bargaining behavior in network resource sharing scenarios where these assumptions do not hold, i.e., bargainers are bounded-rational and have heterogeneous knowledge. Our first key idea is using a multi-output Long Short-Term Memory (LSTM) neural network to learn bargainers' behavior patterns and predict both their discrete and continuous decisions. Our second key idea is assigning a unique latent vector to each bargainer, characterizing the heterogeneity among bargainers. We propose a scheme to jointly learn the LSTM weights and latent vectors from real bargaining data, and utilize them to achieve a personalized behavior prediction. We prove that estimating our LSTM weights corresponds to a special design of LSTM training, and also theoretically characterize the performance of our scheme. To deal with large-scale datasets in practice, we further propose a variant of our scheme to accelerate the LSTM training. Experiments on a large real-world bargaining dataset demonstrate that our schemes achieve more accurate personalized predictions than baselines.
Speaker Haoran Yu (Beijing Institute of Technology)

Haoran Yu received the Ph.D. degree from the Department of Information Engineering, the Chinese University of Hong Kong in 2016. From 2015 to 2016, he was a Visiting Student with the Yale Institute for Network Science and the Department of Electrical Engineering, Yale University. From 2018 to 2019, he was a Post-Doctoral Fellow with the Department of Electrical and Computer Engineering, Northwestern University. He is currently an Associate Professor with the School of Computer Science & Technology, Beijing Institute of Technology. His current research interests lie in the interdisciplinary area between game theory and artificial intelligence, with focuses on human strategic behavior prediction and private information inference. His past research is mainly about game theory in networks. His research work has been presented/published in top-tier conferences, including IEEE INFOCOM, ACM SIGMETRICS, ACM MobiHoc, IJCAI, AAAI, and journals, including IEEE/ACM TON, IEEE JSAC, and IEEE TMC.


PPGSpotter: Personalized Free Weight Training Monitoring Using Wearable PPG Sensor

Xiaochen Liu, Fan Li, Yetong Cao, Shengchun Zhai and Song Yang (Beijing Institute of Technology, China); Yu Wang (Temple University, USA)

Free weight training (FWT) is of utmost importance for physical well-being. However, the success of FWT depends on choosing the suitable workload, as improper selections can lead to suboptimal outcomes or injury. Current workload estimation approaches rely on manual recording and specialized equipment, with limited feedback. Therefore, we introduce PPGSpotter, a novel PPG-based system for FWT monitoring in a convenient, low-cost, and fine-grained manner. By characterizing the arterial geometry compressions caused by the deformation of distinct muscle groups during various exercises and workloads in PPG signals, PPGSpotter can infer essential FWT factors such as workload, repetitions, and exercise type. To remove pulse-related interference that heavily contaminates PPG signals, we develop an arterial interference elimination approach based on adaptive filtering, effectively extracting the pure motion-derived signal (MDS). Furthermore, we explore 2D representations within the phase space of MDS to extract spatiotemporal information, enabling PPGSpotter to address the challenge of resisting sensor shifts. Finally, we leverage a multi-task CNN-based model with workload adjustment guidance to achieve personalized FWT monitoring. Extensive experiments with 15 participants confirm that PPGSpotter can achieve workload estimation (0.59 kg RMSE), repetitions estimation (0.96 reps RMSE), and exercise type recognition (91.57% F1-score) while providing valid workload adjustment recommendations.
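Adaptive interference cancellation of the kind the paper builds on can be sketched with a textbook LMS noise canceller (all signals below are synthetic, and PPGSpotter's arterial interference elimination approach is more elaborate): a pulse-correlated reference is adaptively subtracted from the PPG, leaving an estimate of the motion-derived signal (MDS).

```python
import numpy as np

# LMS adaptive noise cancellation sketch: the FIR filter learns to map a
# pulse-correlated reference onto the PPG's cardiac component, so the
# prediction error approximates the motion-derived signal.

rng = np.random.default_rng(1)
n = 4000
t = np.arange(n) / 100.0                              # 100 Hz sampling
pulse = np.sin(2 * np.pi * 1.2 * t)                   # ~72 bpm cardiac wave
motion = 0.5 * np.sign(np.sin(2 * np.pi * 0.3 * t))   # synthetic exercise
ppg = pulse + motion
ref = np.sin(2 * np.pi * 1.2 * t + 0.3)               # pulse reference

taps, mu = 8, 0.01
w = np.zeros(taps)
mds = np.zeros(n)
for i in range(taps, n):
    x = ref[i - taps:i]
    e = ppg[i] - w @ x                                # error = estimated MDS
    w += mu * e * x                                   # LMS weight update
    mds[i] = e

err = np.abs(mds[2000:] - motion[2000:]).mean()       # after convergence
print(round(err, 3))
```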
Speaker Xiaochen Liu (Beijing Institute of Technology, China)

Xiaochen Liu is now working toward the Ph.D. degree in the School of Computer Science at Beijing Institute of Technology, advised by Prof. Fan Li. She received her B.E. degree in Internet of Things from China University of Petroleum in 2020. Her research interests include Wearable Computing, Mobile Health, and the IoT. 


Session Chair

Yuval Shavitt (Tel-Aviv University, Israel)



Gold Sponsor




Student Travel Grants





Made with ♥ in Toronto · Privacy Policy · INFOCOM 2020 · INFOCOM 2021 · INFOCOM 2022 · INFOCOM 2023 · © 2024 Duetone Corp.