IEEE INFOCOM 2024

Session A-8

A-8: Mobile Networks and Applications

Conference
8:30 AM — 10:00 AM PDT
Local
May 23 Thu, 10:30 AM — 12:00 PM CDT
Location
Regency A

AIChronoLens: Advancing Explainability for Time Series AI Forecasting in Mobile Networks

Claudio Fiandrino, Eloy Pérez Gómez, Pablo Fernández Pérez, Hossein Mohammadalizadeh, Marco Fiore and Joerg Widmer (IMDEA Networks Institute, Spain)

Next-generation mobile networks will increasingly rely on the ability to forecast traffic patterns for resource management. Usually, this translates into forecasting diverse objectives like traffic load, bandwidth, or channel spectrum utilization, measured over time. Among other techniques, Long Short-Term Memory (LSTM) models have proved very successful for this task. Unfortunately, the inherent complexity of these models makes them hard to interpret and thus hampers their deployment in production networks. To make matters worse, EXplainable Artificial Intelligence (XAI) techniques, which are primarily conceived for computer vision and natural language processing, fail to provide useful insights: they are blind to the temporal characteristics of the input and only work well with semantically rich data like images or text. In this paper, we take the research on XAI for time series forecasting one step further by proposing AIChronoLens, a new tool that links legacy XAI explanations with the temporal properties of the input. In this way, AIChronoLens makes it possible to dive deep into model behavior and spot, among other aspects, the hidden causes of errors. Extensive evaluations with real-world mobile traffic traces pinpoint model behaviors that could not be spotted otherwise and show that model performance can increase by 32%.
Speaker
Speaker biography is not available.

Characterizing 5G Adoption and its Impact on Network Traffic and Mobile Service Consumption

Sachit Mishra and André Felipe Zanella (IMDEA Networks Institute, Spain); Orlando E. Martínez-Durive (IMDEA Networks Institute & Universidad Carlos III de Madrid, Spain); Diego Madariaga (IMDEA Networks Institute, Spain); Cezary Ziemlicki (Orange labs, France); Marco Fiore (IMDEA Networks Institute, Spain)

The rollout of 5G, coupled with the traffic monitoring capabilities of modern industry-grade networks, offers an unprecedented opportunity to closely observe the impact that the introduction of a new major wireless technology has on end users. In this paper, we seize such a unique chance, and carry out a first-of-its-kind in-depth analysis of 5G adoption along spatial, temporal and service dimensions. Leveraging massive measurement data about application-level demands collected in a nationwide 4G/5G network, we characterize the impact of the new technology on when, where and how mobile subscribers consume 5G traffic, both in aggregate and for individual types of services. This lets us unveil the overall incidence of 5G in the total mobile network traffic, its spatial and temporal fluctuations, its effect on the way 5G services are consumed, the way individual services and geographical locations contribute to fluctuations in the 5G demand, as well as surprising connections between the socioeconomic status of local populations and the way the 5G technology is presently consumed.
Speaker
Speaker biography is not available.

Exploiting Multiple Similarity Spaces for Efficient and Flexible Incremental Update of Mobile Applications

Lewei Jin (Zhejiang University, China); Wei Dong, Jiang BoWen, Tong Sun and Yi Gao (Zhejiang University, China)

Mobile application updates occur frequently, and they continue to add considerable traffic over the Internet. Differencing algorithms, which compute a small delta between the new version and the old version, are often employed to reduce the update overhead. Transforming the old and new files into decoded similarity spaces can drastically reduce the delta size. However, this transformation is often hindered by two practical issues: (1) insufficient decoding and (2) long recompression time. To address this challenge, we propose two general approaches to transforming compressed files into the fully decoded similarity space and the partially decoded similarity space with low recompression time. The first approach uses a recompression-aware searching mechanism, based on a general full decoding tool, to transform a deflate stream into the fully decoded similarity space with configurable searching complexity. The second approach uses a novel solution to transform a deflate stream into the partially decoded similarity space with differencing-friendly LZ77 token reencoding. We have also proposed an algorithm called MDiffPatch to exploit the fully and partially decoded similarity spaces. Extensive evaluation results show that MDiffPatch achieves a lower compression ratio than state-of-the-art algorithms, and its tunable parameter allows us to achieve a good tradeoff between compression ratio and recompression time.
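As a toy illustration of the differencing idea the abstract describes (not the paper's MDiffPatch or its similarity-space transforms), a delta can be encoded as copy/insert opcodes against the old file and the opcode stream then compressed; all opcode formats and names below are illustrative assumptions:

```python
import bz2
import difflib

def make_delta(old: bytes, new: bytes) -> bytes:
    """Encode `new` relative to `old` as copy/insert opcodes, then compress.
    A toy sketch of differencing, far simpler than the paper's algorithms."""
    ops = []
    sm = difflib.SequenceMatcher(a=old, b=new, autojunk=False)
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal":
            ops.append(b"C%d,%d;" % (i1, i2 - i1))        # copy span from old
        elif j2 > j1:
            ops.append(b"I%d:" % (j2 - j1) + new[j1:j2])  # insert literal bytes
    return bz2.compress(b"".join(ops))

def apply_delta(old: bytes, delta: bytes) -> bytes:
    out, data, i = [], bz2.decompress(delta), 0
    while i < len(data):
        if data[i:i + 1] == b"C":             # copy opcode: C<offset>,<len>;
            j = data.index(b";", i)
            off, ln = map(int, data[i + 1:j].split(b","))
            out.append(old[off:off + ln])
            i = j + 1
        else:                                 # insert opcode: I<len>:<bytes>
            j = data.index(b":", i)
            ln = int(data[i + 1:j])
            out.append(data[j + 1:j + 1 + ln])
            i = j + 1 + ln
    return b"".join(out)

old = b"hello mobile application version one"
new = b"hello mobile application version two!"
assert apply_delta(old, make_delta(old, new)) == new
```

Real differencing tools gain most of their advantage from working in a decoded similarity space, which is exactly the transformation the paper targets; this sketch operates on raw bytes only.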
Speaker Lewei Jin (Zhejiang University)

Lewei Jin graduated with a bachelor's degree from Hangzhou University of Electronic Science and Technology. He is currently pursuing a PhD in Software Engineering at Zhejiang University, with a research interest in mobile application security.


LoPrint: Mobile Authentication of RFID-Tagged Items Using COTS Orthogonal Antennas

Yinan Zhu (The Hong Kong University of Science and Technology (HKUST), Hong Kong); Qian Zhang (Hong Kong University of Science and Technology, Hong Kong)

Authenticating RFID-tagged items during mobile inventory is a critical task for anti-counterfeiting. However, past authentication solutions using commercial off-the-shelf (COTS) devices cannot be applied in mobile scenarios, due to either high latency or non-robustness to tag movement. This paper introduces LoPrint, the first system to effectively authenticate mobile tagged items using the COTS orthogonal antennas existing in most infrastructures. The key insight of LoPrint is to randomly attach multiple tags on each item as a tag group and leverage the stable layout relationships of this tag group as novel fingerprints, including the relative distance matrix (RDM) and relative orientation matrix (ROM). Additionally, a new hardware fingerprint called cross-polarization ratio (CPR) is proposed to help distinguish the tag category. Furthermore, a lightweight approach is designed to robustly extract RDM, ROM, and CPR from RSSI and phase sequences. LoPrint is prototyped and deployed on a conveyor in a lab environment and a tunnel in a real-world RFID warehouse, where 726 tagged items with random layouts are used for evaluation. Experimental results show that LoPrint can achieve a high authentication accuracy of 82.92% on the fixed conveyor and 79.48% on the random warehouse trolley, outperforming the transferred state-of-the-art solution by over 10x.
Speaker Yinan Zhu (Hong Kong University of Science and Technology)

Yinan Zhu is currently a PhD candidate at the Department of Computer Science and Engineering, Hong Kong University of Science and Technology (HKUST).


Session Chair

Ruozhou Yu (North Carolina State University, USA)

Session B-8

B-8: Streaming Systems

Conference
8:30 AM — 10:00 AM PDT
Local
May 23 Thu, 10:30 AM — 12:00 PM CDT
Location
Regency B

Scout Sketch: Finding Promising Items in Data Streams

Tianyu Ma, Guoju Gao, He Huang, Yu-e Sun and Yang Du (Soochow University, China)

This paper studies a new but important pattern for items in data streams, called promising items. A promising item is one whose frequencies over multiple consecutive time windows show an overall upward trend, while slight decreases in some of these windows are allowed. Many practical applications can benefit from the notion of promising items, e.g., detecting potentially hot events or news in social networks, preventing congestion in communication channels, and monitoring latent attacks in computer networks. To accurately find promising items in data streams in real time under limited memory, we propose a novel structure named Scout Sketch, which consists of Filter and Finder. Filter is devised based on the Bloom filter to eliminate unqualified items with low memory overhead; Finder records the necessary information about the potential items and detects the promising items at the end of each time window, using tailor-made detection operations we propose. We also analyze the theoretical performance of Scout Sketch. Finally, we conducted extensive experiments based on four real-world datasets. The experimental results show that the F1 score and throughput of Scout Sketch are about 2.02 and 7.23 times those of the compared solutions, respectively.
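The promising-item pattern itself can be sketched in a few lines; the `BloomFilter` and `is_promising` below are simplified stand-ins for Scout Sketch's Filter and Finder, and the parameters (filter size, dip tolerance) are assumptions for illustration only:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter, a simplified stand-in for Scout Sketch's Filter."""
    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes = size, hashes
        self.bits = [False] * size

    def _positions(self, item):
        for i in range(self.hashes):
            digest = hashlib.md5(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = True

    def contains(self, item):
        return all(self.bits[p] for p in self._positions(item))

def is_promising(freqs, dip_tolerance=1):
    """Overall upward trend across consecutive window frequencies, allowing
    at most `dip_tolerance` slight decreases (an assumed threshold)."""
    dips = sum(1 for a, b in zip(freqs, freqs[1:]) if b < a)
    return freqs[-1] > freqs[0] and dips <= dip_tolerance

print(is_promising([3, 5, 4, 7, 9]))  # True: one small dip, upward overall
print(is_promising([9, 7, 6, 5, 4]))  # False: downward trend
```

In the actual sketch, the Finder works on compact per-window summaries rather than full frequency lists, which is what makes the memory bound possible.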
Speaker
Speaker biography is not available.

Exstream: A Delay-minimized Streaming System with Explicit Frame Queueing Delay Measurement

Shinik Park, Sanghyun Han, Junseon Kim and Jongyun Lee (Seoul National University, Korea (South)); Sangtae Ha (University of Colorado Boulder, USA); Kyunghan Lee (Seoul National University, Korea (South))

Network fluctuations can cause unpredictable degradation of the user's quality of experience (QoE) in real-time video streaming. The intrinsic property of real-time video streaming, which generates delay-sensitive and chunk-based video frames, makes the situation even more complicated. Although previous approaches have tried to alleviate this problem by controlling the video bitrate based on the current network capacity estimate, they do not take into account the explicit queueing delay experienced by each video frame when determining the bitrate of upcoming video frames. To tackle this problem, we propose a new real-time video streaming system, Exstream, that can adapt to dynamic network conditions with the help of a video bitrate control method and a bandwidth estimation method designed for real-time video streaming environments. Exstream explicitly estimates the queueing delay experienced by each video frame based on the transmission time budget that the frame can maximally utilize, which depends on the frame generation interval, and adjusts the bitrate of newly generated video frames to suppress the queueing delay to a level close to zero. Our comprehensive experiments demonstrate that Exstream achieves lower frame delay than four existing systems, Salsify, WebRTC, Skype, and Hangouts, without frequent video frame skips.
Speaker Shinik Park (Seoul National University)



Emma: Elastic Multi-Resource Management for Realtime Stream Processing

Rengan Dou, Xin Wang and Richard T. B. Ma (National University of Singapore, Singapore)

In stream processing applications, an operator is often instantiated into multiple parallel execution instances, referred to as executors, to facilitate large-scale data processing. Due to unpredictable changes in executor workloads, data tuples processed by different executors may exhibit varying latency. The executor with the maximum latency significantly impacts the end-to-end latency. Existing solutions, such as load balancing and horizontal scaling, which involve workload migration, often incur substantial time overhead. In contrast, elastically scaling up/down resources of executors can offer rapid adaptability; however, prior works only considered CPU scaling.

This paper presents Emma, an elastic multi-resource manager. The core of Emma is a multi-resource provisioning plan that conducts performance analysis and resource adjustment in real-time. We explore the relationship between resources and performance experimentally and theoretically, guiding the plan to adaptively allocate the appropriate combination of resources to 1) accommodate the dynamic workload; 2) efficiently utilize resources to enhance the performance of as many executors as possible. Additionally, we propose an online learning method that makes the manager seamlessly adapt to diverse stream applications. We integrate Emma with Apache Samza, and our experiments show that compared to existing solutions, Emma can significantly reduce latency by orders of magnitude in real-world applications.
Speaker Rengan Dou (National University of Singapore)

Rengan Dou is a Ph.D. student at the National University of Singapore. He received his bachelor's degree from the University of Science and Technology of China. His research interests include cloud computing, edge computing, and stream processing.


A Multi-Agent View of Wireless Video Streaming with Delayed Client-Feedback

Nouman Khan (University of Michigan, USA); Ujwal Dinesha (Texas A&M University, USA); Subrahmanyam Arunachalam (Texas A&M University, USA); Dheeraj Narasimha (Texas A&M University, USA); Vijay Subramanian (University of Michigan, USA); Srinivas G Shakkottai (Texas A&M University, USA)

We study the optimal control of multiple video streams over a wireless downlink from a base-transceiver-station (BTS)/access point to N end-devices (EDs). The BTS sends video packets to each ED under a joint transmission energy constraint, the EDs choose when to play out the received packets, and the collective goal is to provide a high Quality-of-Experience (QoE) to the clients/end-users. All EDs send feedback about their states and actions to the BTS, which arrives after a fixed deterministic delay. We analyze this team problem with delayed feedback as a cooperative Multi-Agent Constrained Partially Observable Markov Decision Process (MA-C-POMDP).

First, using a recently established strong duality result for MA-C-POMDPs, the original problem is decomposed into N independent unconstrained transmitter-receiver (two-agent) problems---all sharing a Lagrange multiplier (that also needs to be optimized for optimal control). Thereafter, the common information (CI) approach and the formalism of approximate information states (AISs) are used to guide the design of a neural-network based architecture for learning-based multi-agent control in a single unconstrained transmitter-receiver problem. Finally, simulations on a single transmitter-receiver pair with a stylized QoE model are performed to highlight the advantage of delay-aware two-agent coordination over the transmitter choosing both transmission and play-out actions (perceiving the delayed state of the receiver as its current state).
Speaker
Speaker biography is not available.

Session Chair

Srikanth V. Krishnamurthy (University of California, Riverside, USA)

Session C-8

C-8: Staleness and Age of Information (AoI)

Conference
8:30 AM — 10:00 AM PDT
Local
May 23 Thu, 10:30 AM — 12:00 PM CDT
Location
Regency C

An Analytical Approach for Minimizing the Age of Information in a Practical CSMA Network

Suyang Wang, Oluwaseun Ajayi and Yu Cheng (Illinois Institute of Technology, USA)

Age of information (AoI) is a crucial metric in modern communication systems, quantifying information freshness at its destination. This study proposes a novel and general approach utilizing stochastic hybrid systems (SHS) for AoI analysis and minimization in carrier sense multiple access (CSMA) networks. Specifically, we consider a practical yet general networking scenario where multiple nodes contend for transmission through a standard CSMA-based medium access control (MAC) protocol, and the tagged node under consideration uses a small transmission buffer to keep the AoI small. For the first time, we develop an SHS-based analytical model for this finite-buffer transmission system over the CSMA MAC. Moreover, we develop a creative method to incorporate the collision probability into the SHS model, with background nodes having heterogeneous traffic arrival rates. Our model enables us to analytically find the optimal sampling rate that minimizes the AoI of the tagged node in a wide range of practical networking scenarios. Our analysis reveals insights into the impact of buffer size when jointly optimizing throughput and AoI. The SHS model is cast over an 802.11-based MAC to examine the performance, with comparisons to ns-based simulation results. The accuracy of the modeling and the efficiency of optimal sampling are convincingly demonstrated.
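For readers unfamiliar with the metric, average AoI under a simplified zero-delay sawtooth model (a textbook illustration, not the paper's SHS analysis) can be computed as the area under the age curve divided by the horizon:

```python
def average_aoi(update_times, horizon):
    """Average age of information over [0, horizon] under a simplified
    zero-delay sawtooth model: age grows linearly with time and resets
    to 0 at each update instant. Hypothetical helper for illustration."""
    area, prev, age = 0.0, 0.0, 0.0
    for t in sorted(update_times):
        dt = t - prev
        area += age * dt + 0.5 * dt * dt  # area under the rising sawtooth
        age, prev = 0.0, t                # update arrives: age resets
    dt = horizon - prev
    area += age * dt + 0.5 * dt * dt      # tail segment after last update
    return area / horizon

# updates every second over a 4-second horizon: average age is 0.5
print(average_aoi([1, 2, 3, 4], horizon=4))  # 0.5
```

The paper's setting is harder precisely because updates traverse a contention-based MAC with collisions and a finite buffer, so the reset instants are themselves random.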
Speaker
Speaker biography is not available.

Reducing Staleness and Communication Waiting via Grouping-based Synchronization for Distributed Deep Learning

Yijun Li, Jiawei Huang, Zhaoyi Li, Jingling Liu, Shengwen Zhou, Wanchun Jiang and Jianxin Wang (Central South University, China)

Distributed deep learning has been widely employed to train deep neural networks over large-scale datasets. However, the commonly used parameter server architecture suffers from long synchronization times in data-parallel training. Although existing solutions reduce synchronization overhead by breaking the synchronization barriers or limiting the staleness bound, they inevitably experience low convergence efficiency and long synchronization waiting. To address these problems, we propose Gsyn to reduce both synchronization overhead and staleness. Specifically, Gsyn divides workers into multiple groups. The workers in the same group coordinate with each other using the bulk synchronous parallel scheme to achieve high convergence efficiency, and each group communicates with the parameter server asynchronously to reduce the synchronization waiting time, consequently increasing the convergence efficiency. Furthermore, we theoretically analyze the optimal number of groups to achieve a good tradeoff between staleness and synchronization waiting. Evaluations in a realistic cluster with multiple training tasks demonstrate that Gsyn is beneficial and accelerates distributed training by up to 27% over state-of-the-art solutions.
Speaker Yijun Li (Central South University)



An Easier-to-Verify Sufficient Condition for Whittle Indexability and Application to AoI Minimization

Sixiang Zhou (Purdue University, West Lafayette, USA); Xiaojun Lin (The Chinese University of Hong Kong, Hong Kong & Purdue University, West Lafayette (on Leave), USA)

We study a scheduling problem for a Base Station transmitting status information to multiple User Equipments (UE) with the goal of minimizing the total expected Age-of-Information (AoI). Such a problem can be formulated as a Restless Multi-Arm Bandit (RMAB) problem and solved asymptotically-optimally by a low-complexity Whittle index policy, if each UE's sub-problem is Whittle indexable. However, proving Whittle indexability can be highly non-trivial, especially when the value function cannot be derived in closed-form. In particular, this is the case for the AoI minimization problem with stochastic arrivals and unreliable channels, whose Whittle indexability remains an open problem. To overcome this difficulty, we develop a sufficient condition for Whittle indexability based on the notion of active time (AT). Even though the AT condition shares considerable similarity to the Partial Conservation Law (PCL) condition, it is much easier to understand and verify. We then apply our AT condition to the stochastic-arrival unreliable-channel AoI minimization problem and, for the first time in the literature, prove its Whittle indexability. Our proof uses a novel coupling approach to verify the AT condition, which may also be of independent interest to other large-scale RMAB problems.
Speaker
Speaker biography is not available.

Joint Optimization of Model Deployment for Freshness-Sensitive Task Assignment in Edge Intelligence

Haolin Liu and Sirui Liu (Xiangtan University, China); Saiqin Long (Jinan University, China); Qingyong Deng (Guangxi Normal University, China); Zhetao Li (Jinan University, China)

Edge Intelligence aims to push deep learning (DL) services to the network edge to reduce response time and protect privacy. In implementations, proximity deployment of DL models and timely updates can improve the quality of experience (QoE) for users, but increase the operation cost and pose a challenge for task assignment. To address the challenge, a joint online optimization problem for DL model deployment (including placement and update) and freshness-sensitive task assignment is formulated to improve QoE and application service provider (ASP) profit. In the problem, we introduce the age of information (AoI) to quantify the freshness of the DL model and represent user QoE as an AoI-based utility function. To solve the problem, an online model placement, update, and task assignment (MPUTA) algorithm is proposed. It first converts the time-slot-coupled problem into a single-time-slot problem using a regularization technique, and decomposes the single-time-slot problem into model deployment and task assignment subproblems. It then uses a randomized rounding technique to handle the model deployment subproblem and a graph matching technique to solve the task assignment subproblem. In simulation experiments, MPUTA is shown to outperform other benchmark algorithms in terms of both user QoE and ASP profit.
Speaker Sirui Liu (Xiangtan University)

Sirui Liu received the B.Eng. degree from Wuhan Polytechnic University, China, in 2021, and is currently pursuing a Master's degree in Computer Technology at Xiangtan University, China. His research interests include edge intelligence and dynamic deep learning model deployment.


Session Chair

Hongwei Zhang (Iowa State University, USA)

Session D-8

D-8: Backscatter Networking

Conference
8:30 AM — 10:00 AM PDT
Local
May 23 Thu, 10:30 AM — 12:00 PM CDT
Location
Regency D

TRIDENT: Interference Avoidance in Multi-reader Backscatter Network Via Frequency-space Division

Yang Zou (Tsinghua University, China); Xin Na (Tsinghua University, China); Xiuzhen Guo (Zhejiang University, China); Yimiao Sun and Yuan He (Tsinghua University, China)

Backscatter is an enabling technology for battery-free sensing in industrial IoT applications. To fully cover the numerous tags in a deployment area, one often needs to deploy multiple readers, each of which communicates with the tags within its communication range. But the actual backscattered signals from a tag are likely to reach a reader outside its communication range, causing undesired interference. Conventional approaches for interference avoidance, either TDMA or CSMA based, separate the readers' media accesses in the time dimension and suffer from limited network throughput. In this paper, we propose TRIDENT, a novel backscatter tag design that enables interference avoidance via frequency-space division. By incorporating a tunable bandpass filter and multiple terminal loads, a TRIDENT tag is able to detect its channel condition and adaptively adjust the frequency band and the power of its backscattered signals, so that all the readers in the network can operate concurrently without interference. We implement TRIDENT and evaluate its performance under various settings. The results demonstrate that TRIDENT enhances the network throughput by 3.18x, compared to the TDMA based scheme.
Speaker Yang Zou (Tsinghua University)

Yang Zou is currently a Ph.D. student at Tsinghua University. He received his B.E. degree from the Beijing University of Aeronautics and Astronautics (BUAA). His research interests include wireless networking and communication.


ConcurScatter: Scalable Concurrent OFDM Backscatter Using Subcarrier Pattern Diversity

Caihui Du (Beijing Institute of Technology, China); Jihong Yu (Beijing Institute of Technology, China); Rongrong Zhang (Capital Normal University, China); Jianping An (Beijing Institute of Technology, China)

Ambient OFDM backscatter communication has attracted considerable research effort. Yet prior works focus on point-to-point backscatter from a single tag, leaving behind efficient backscatter networking of multiple tags. In this paper, we design and implement ConcurScatter, the first ambient OFDM backscatter system that scales to concurrent transmission of hundreds of tags. Our key innovation is building and using subcarrier pattern diversity to distinguish concurrent tags. This yields linear collision states rather than the exponential ones of prior works based on IQ-domain diversity, supporting more concurrent transmissions. We realize this by designing a suite of techniques, including midair frequency synthesis that forms a unique subcarrier pattern for each concurrent tag, non-integer cyclic shift that helps support more concurrent tags, and subcarrier pattern reconstruction that creates virtual subcarriers to enable single-symbol parallel decoding. Testbed experiments confirm that ConcurScatter supports seven more concurrent tags with similar BER and 8.4x higher throughput than the point-to-point backscatter system RapidRider. Large-scale simulations show that ConcurScatter supports 200 tags, 40x more than the state-of-the-art concurrent OFDM backscatter system FreeCollision.
Speaker Caihui Du (Beijing Institute of Technology)

Caihui Du received the B.E. degree from Beijing Institute of Technology, Beijing, China, in 2021. She is currently pursuing the Ph.D. degree with the School of Information and Electronics, Beijing Institute of Technology. Her research interests include ambient backscatter communication and Internet-of-Things applications.


Efficient LTE Backscatter with Uncontrolled Ambient Traffic

Yifan Yang, Yunyun Feng and Wei Gong (University of Science and Technology of China, China); Yu Yang (City University of Hong Kong, Hong Kong)

Ambient LTE backscatter is a promising way to enable ubiquitous wireless communication with ultra-low power and cost. However, modulation in previous LTE backscatter systems relies heavily on the original data (content) of the signals. They either demodulate tag data using an additional receiver to provide the content of the excitation, or modulate on a few predefined reference signals in random ambient LTE traffic. This paper presents CABLTE, a content-agnostic backscatter system that efficiently utilizes uncontrolled LTE PHY resources for backscatter communication using a single receiver. Our system is superior to prior work in two aspects: 1) using one receiver to obtain tag data makes CABLTE more practical in real-world applications, and 2) efficient modulation on LTE PHY resources improves the data rate of backscatter communication. To obtain the tag data without knowing the ambient content, we design a checksum-based codeword translation method. We also propose a customized channel estimation scheme and a signal identification component in the backscatter system to ensure accurate modulation and demodulation. Extensive experiments show that CABLTE provides a maximum tag throughput of 22 kbps, which is 3.67x higher than the content-agnostic system CAB and even 1.38x higher than the content-based system SyncLTE.
Speaker Yifan Yang (University of Science and Technology of China)

Yifan Yang received his B.S. degree from the School of Computer Science and Technology, University of Science and Technology of China, Anhui Province, China in 2021. He is currently pursuing the Ph.D. degree at the School of Computer Science and Technology, University of Science and Technology of China, Anhui Province, China. He is also a joint Ph.D. student at the School of Data Science, City University of Hong Kong. His research interests include wireless networks and IoT.


Efficient Two-Way Edge Backscatter with Commodity Bluetooth

Maoran Jiang (University of Science and Technology of China, China); Xin Liu (The Ohio State University, USA); Li Dong (Macau University of Science and Technology, Macao); Wei Gong (University of Science and Technology of China, China)

Two-way backscatter is essential to general-purpose backscatter communication as it provides rich interaction to support diverse applications on commercial devices. However, existing Bluetooth backscatter systems suffer from unstable uplinks due to poor carrier-identification capability and inefficient downlinks caused by packet-length modulation. This paper proposes EffBlue, an efficient two-way backscatter design for commercial Bluetooth devices. EffBlue employs a simple edge backscatter server that alleviates the computational burden on the tag and helps build efficient uplinks and downlink. Specifically, efficient uplinks are designed by introducing an accurate synchronization scheme, which can effectively eliminate the use of non-compliant packets as carriers. To break the limitation of packet-level modulation, we design a new symbol-level WiFi-ASK downlink where the edge sends ASK-like WiFi signals and the tag can decode such signals using a simple envelope detector. We prototype the edge server using commodity WiFi and Bluetooth chips and build two-way backscatter tags with FPGAs. Experimental results show that EffBlue can identify the target excitations with more than 99% precision. Meanwhile, its WiFi-ASK downlink can achieve up to 124 kbps, which is 25x better than FreeRider.
Speaker Maoran Jiang (University of Science and Technology of China)

Maoran Jiang is a third-year Computer Science Ph.D. student at the University of Science and Technology of China. His research interests lie in wireless networks and ultra-low power systems.


Session Chair

Fernando A. Kuipers (Delft University of Technology, The Netherlands)

Session E-8

E-8: Machine Learning 2

Conference
8:30 AM — 10:00 AM PDT
Local
May 23 Thu, 10:30 AM — 12:00 PM CDT
Location
Regency E

Deep Learning Models As Moving Targets To Counter Modulation Classification Attacks

Naureen Hoque and Hanif Rahbari (Rochester Institute of Technology, USA)

Malicious entities use advanced modulation classification (MC) techniques to launch traffic analysis, selective jamming, evasion, and poisoning attacks. Recent studies show that current defense mechanisms against such attacks are static in nature and vulnerable to persistent adversaries who invest time and resources into learning the defenses, thereby being able to design and execute more sophisticated attacks to circumvent them. In this paper, we present a moving-target defense framework to support a novel modulation-masking mechanism we develop against advanced and persistent modulation classification attacks. The modulated symbols are masked using small perturbations before transmission, making them appear to come from another modulation scheme. By deploying a pool of deep learning models and perturbation-generating techniques, the defense strategy keeps changing (moving) them when needed, making it difficult for adversaries to keep up with the defense system's changes over time. We show that the overall system performance remains unaffected under our technique. We further demonstrate that our masking technique, in addition to other existing defenses, can be learned and circumvented over time by a persistent adversary unless a moving-target defense approach is adopted.
Speaker
Speaker biography is not available.

Deep Learning-based Modulation Classification of Practical OFDM signals for Spectrum Sensing

Byungjun Kim (UCSD, USA); Peter Gerstoft (University of California, San Diego, USA); Christoph F Mecklenbräuker (TU Wien, Austria)

In this study, the modulation of symbols on OFDM subcarriers is classified for transmissions following Wi-Fi 6 and 5G downlink specifications. First, our approach estimates the OFDM symbol duration and cyclic prefix length based on the cyclic autocorrelation function. We propose a feature extraction algorithm characterizing the modulation of OFDM signals, which includes removing the effects of synchronization errors. The obtained feature is converted into a 2D histogram of phase and amplitude, and this histogram is taken as input to a convolutional neural network (CNN)-based classifier. The classifier does not require prior knowledge of protocol-specific information such as the Wi-Fi preamble or the resource allocation of 5G physical channels. The classifier's performance, evaluated on synthetic and real-world measured over-the-air (OTA) datasets, achieved a minimum accuracy of 97% on OTA data when the SNR is above the value required for data transmission.
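The 2D amplitude-phase histogram feature can be approximated as follows; the bin count, normalization, and function name are assumptions for illustration, not the paper's exact settings:

```python
import numpy as np

def amp_phase_histogram(symbols, bins=32):
    """2D histogram of per-subcarrier amplitude and phase, a simplified
    version of the CNN input feature described above (parameters assumed)."""
    amp = np.abs(symbols)
    phase = np.angle(symbols)
    hist, _, _ = np.histogram2d(
        phase, amp,
        bins=bins,
        range=[[-np.pi, np.pi], [0.0, amp.max() if amp.size else 1.0]],
    )
    return hist / max(hist.sum(), 1)  # normalize to a probability map

# QPSK-like subcarrier symbols with a little noise
rng = np.random.default_rng(0)
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 1000)))
noisy = qpsk + 0.05 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
h = amp_phase_histogram(noisy)
print(h.shape)  # (32, 32)
```

For a clean QPSK input, the mass concentrates in four phase bins at a single amplitude ring, which is the kind of signature the CNN classifier learns to separate across modulation schemes.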
Speaker
Speaker biography is not available.

Resource-aware Deployment of Dynamic DNNs over Multi-tiered Interconnected Systems

Chetna Singhal (Indian Institute of Technology Kharagpur, India); Yashuo Wu (University of California Irvine, USA); Francesco Malandrino (CNR-IEIIT, Italy); Marco Levorato (University of California, Irvine, USA); Carla Fabiana Chiasserini (Politecnico di Torino & CNIT, IEIIT-CNR, Italy)

0
The increasing pervasiveness of intelligent mobile applications requires exploiting the full range of resources offered by the mobile-edge-cloud network for the execution of inference tasks. However, due to the heterogeneity of such multi-tiered networks, it is essential to make the applications' demand amenable to the available resources while minimizing energy consumption. Modern dynamic deep neural networks (DNNs) achieve this goal through multi-branched architectures where early exits enable sample-based adaptation of the model depth. In this paper, we tackle the problem of allocating sections of DNNs with early exits to the nodes of the mobile-edge-cloud system. By envisioning a 3-stage graph-modeling approach, we represent the possible options for splitting the DNN and deploying the DNN blocks on the multi-tiered network, embedding both the system constraints and the application requirements in a convenient and efficient way. Our framework - named Feasible Inference Graph (FIN) - can identify the solution that minimizes overall inference energy consumption while enabling distributed inference over the multi-tiered network with the target quality and latency. Our results, obtained for DNNs with different levels of complexity, show that FIN matches the optimum and yields over 65% energy savings relative to a state-of-the-art cost-minimization technique.
Speaker Chetna Singhal

Chetna Singhal is an Assistant Professor in the Electronics and Communication Engineering Department at IIT Kharagpur.


Jewel: Resource-Efficient Joint Packet and Flow Level Inference in Programmable Switches

Aristide Tanyi-Jong Akem (IMDEA Networks Institute, Spain & Universidad Carlos III de Madrid, Spain); Beyza Butun (Universidad Carlos III de Madrid & IMDEA Networks Institute, Spain); Michele Gucciardo and Marco Fiore (IMDEA Networks Institute, Spain)

0
Embedding machine learning (ML) models in programmable switches realizes the vision of high-throughput, low-latency inference at line rate. Recent works have made breakthroughs in embedding Random Forest (RF) models in switches for either packet-level or flow-level inference. The former relies on packet-header features that are simple to implement but limit accuracy in challenging use cases; the latter exploits richer flow features to improve accuracy, but leaves early packets in each flow unclassified. We propose Jewel, an in-switch ML model based on a fully joint packet- and flow-level design, which takes the best of both worlds by classifying early flow packets individually and shifting to flow-level inference when possible. Our proposal involves (i) a single RF model trained to classify both packets and flows, and (ii) hardware-aware model selection and training techniques to minimize the resource footprint. We implement Jewel in P4 and deploy it in a testbed with Intel Tofino switches, where we run extensive experiments with a variety of real-world use cases. Results reveal how our solution outperforms four state-of-the-art benchmarks, with accuracy gains in the 2.2%-5.3% range.
Speaker Beyza Bütün

Beyza Bütün is a Ph.D. student in the Networks Data Science Group at IMDEA Networks Institute in Madrid, Spain. She is part of the ECOMOME project, which aims to model and optimise the energy consumption of networks. She is also a Ph.D. student in the Department of Telematics Engineering at Universidad Carlos III de Madrid, Spain. She holds bachelor's and master's degrees in Computer Engineering from Middle East Technical University in Ankara, Turkey. During her master's, she worked on the optimal design of wireless data center networks. Beyza's current research interests are in-band network intelligence, distributed in-band programming, and energy consumption optimization in the data plane.


Session Chair

Marilia Curado (University of Coimbra, Portugal)

Enter Zoom
Session F-8

F-8: Internet Architectures and Protocols

Conference
8:30 AM — 10:00 AM PDT
Local
May 23 Thu, 10:30 AM — 12:00 PM CDT
Location
Regency F

Efficient IPv6 Router Interface Discovery

Tao Yang and Zhiping Cai (National University of Defense Technology, China)

0
Efficient discovery of router interfaces on the IPv6 Internet is critical for network measurement and cybersecurity. However, existing solutions commonly suffer from inefficiencies due to a lack of initial probing targets (seeds), which ultimately limits their applicability to large-scale IPv6 networks. It is therefore imperative to develop a methodology that enables the efficient collection of IPv6 router interfaces with limited resources, given the impracticality of a brute-force exploration of the extensive IPv6 address space.
In this paper, we introduce Treestrace, an innovative asynchronous prober specifically designed for this purpose. Without prior knowledge of the networks, this tool incrementally adjusts search directions, automatically prioritizing the survey of IPv6 address spaces with a higher concentration of IPv6 router interfaces. Furthermore, we have developed a carefully crafted architecture optimized for probing performance, allowing the tool to probe at the highest theoretically possible rate without requiring excessive computational resources.
Real-world tests show that Treestrace outperforms state-of-the-art works on both seed-based and seedless tasks, achieving at least a 5.57-fold efficiency improvement on large-scale IPv6 router interface discovery. With Treestrace, we discovered approximately 8 million IPv6 router interface addresses from a single vantage point within several hours.
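Treestrace's actual search algorithm is not public in this listing; the toy sketch below only illustrates the general idea of steering a fixed probing budget toward the address regions with the highest observed interface density. The prefixes, densities, and batch size are all invented, and replies are simulated rather than sent on the wire.

```python
import heapq
import random

def adaptive_scan(densities, budget, batch=50, seed=0):
    """Greedy sketch of density-driven probing: always spend the next
    batch of probes on the prefix with the best observed hit rate.
    An optimistic initial score guarantees every prefix is tried once."""
    rng = random.Random(seed)
    heap = [(-1.0, i) for i in range(len(densities))]  # (negated rate, prefix index)
    heapq.heapify(heap)
    hits = [0] * len(densities)
    sent = [0] * len(densities)
    found = 0
    for _ in range(budget // batch):
        _, i = heapq.heappop(heap)
        # Simulated probe replies: each probe hits an interface w.p. densities[i]
        h = sum(rng.random() < densities[i] for _ in range(batch))
        hits[i] += h
        sent[i] += batch
        found += h
        heapq.heappush(heap, (-(hits[i] / sent[i]), i))
    return found, sent

# Four imaginary prefixes; the second is much denser in router interfaces.
found, sent = adaptive_scan([0.02, 0.30, 0.05, 0.01], budget=2000)
print(found, sent)  # most of the budget flows to the dense prefix
```

A real prober would also split promising prefixes into subprefixes and balance exploration against exploitation; this sketch keeps only the prioritization step.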
Speaker Tao Yang (National University of Defense Technology)

Tao Yang received his B.Sc. and M.Sc. degrees in computer science and technology from the National University of Defense Technology, China, in 2019 and 2021, respectively. He is currently pursuing a Ph.D. degree at the same institution. His research interests include IPv6 scanning and network security.


DNSScope: Fine-Grained DNS Cache Probing for Remote Network Activity Characterization

Jianfeng Li, Zheng Lin, Xiaobo Ma, Jianhao Li and Jian Qu (Xi'an Jiaotong University, China); Xiapu Luo (The Hong Kong Polytechnic University, Hong Kong); Xiaohong Guan (Xi'an Jiaotong University & Tsinghua University, China)

0
The domain name system (DNS) is indispensable to nearly every Internet service. It has been extensively utilized for network activity characterization through both passive and active approaches. Compared to the passive approach, active DNS cache probing is privacy-preserving and low-cost, enabling worldwide characterization of remote network activities across different networks. Unfortunately, existing probing-based methods are too coarse-grained to characterize the time-varying features of network activities, substantially limiting their applications in time-sensitive tasks. In this paper, we present DNSScope, a fine-grained DNS cache probing framework that tackles three challenges: sample sparsity, observational distortion, and cache entanglement. DNSScope synthesizes statistical learning and self-supervised transfer learning to achieve time-varying characterization. Extensive evaluations demonstrate that it can accurately estimate the time-varying DNS query arrival rates on recursive DNS resolvers. Its average mean absolute error is 0.124, as low as one-sixth that of the baseline methods.
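DNSScope's estimators are far more sophisticated than this; as background, the sketch below shows the classical renewal-theory baseline that cache-probing methods start from. Assuming Poisson(λ) client queries and a record TTL of T, a cache entry is present a fraction p = λT/(1+λT) of the time, so λ can be recovered from an observed hit fraction. All simulation parameters here are invented.

```python
import random

def estimate_rate(ttl, hit_fraction):
    """Invert p = lam*T / (1 + lam*T): the steady-state fraction of time a
    record with TTL T stays cached under Poisson(lam) client queries."""
    p = hit_fraction
    return p / (ttl * (1.0 - p))

def simulate_hit_fraction(lam, ttl, horizon, num_probes, seed=1):
    """Simulate a resolver cache: a client query that misses refetches the
    record, which then stays cached for `ttl` seconds; probes at random
    times observe whether the record is currently cached."""
    rng = random.Random(seed)
    t, valid_until, intervals = 0.0, -1.0, []
    while t < horizon:
        t += rng.expovariate(lam)      # next client query
        if t > valid_until:            # cache miss: record re-enters the cache
            valid_until = t + ttl
            intervals.append((t, valid_until))
    hits, i = 0, 0
    for p in sorted(rng.uniform(0, horizon) for _ in range(num_probes)):
        while i < len(intervals) and intervals[i][1] < p:
            i += 1
        if i < len(intervals) and intervals[i][0] <= p:
            hits += 1
    return hits / num_probes

frac = simulate_hit_fraction(lam=0.5, ttl=10.0, horizon=20000.0, num_probes=2000)
print(round(estimate_rate(10.0, frac), 2))  # close to the true rate 0.5
```

This baseline only yields a long-run average rate; recovering the *time-varying* rate from sparse, distorted probes is exactly the gap the paper addresses.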
Speaker Jianhao Li (Xi’an Jiaotong University)

Jianhao Li is currently working toward the M.E. degree in Computer Science and Technology at Xi'an Jiaotong University, Xi'an, China. His research interests include cyber security and network measurement.


An Elemental Decomposition of DNS Name-to-IP Graphs

Alex Anderson, Aadi Swadipto Mondal and Paul Barford (University of Wisconsin - Madison, USA); Mark Crovella (Boston University, USA); Joel Sommers (Colgate University, USA)

0
The Domain Name System (DNS) is a critical piece of Internet infrastructure with remarkably complex properties and uses, and accordingly has been extensively studied. In this study we contribute to that body of work by organizing and analyzing records maintained within the DNS as a bipartite graph. We find that relating names and addresses in this way uncovers a surprisingly rich structure. In order to characterize that structure, we introduce a new graph decomposition for DNS name-to-IP mappings, which we term elemental decomposition. In particular, we argue that (approximately) decomposing this graph into bicliques - complete bipartite subgraphs - exposes this rich structure. We utilize large-scale censuses of the DNS to investigate the characteristics of the resulting decomposition, and illustrate how the exposed structure sheds new light on a number of questions about how the DNS is used in practice and suggests several new directions for future research.
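As a small illustration of the name-to-IP bipartite view (the paper's decomposition is more involved than this), the sketch below groups toy DNS records into connected components and tests whether each one is a biclique, i.e., whether every name in the component maps to every IP. The domain names and addresses are invented.

```python
from collections import defaultdict

def components(edges):
    """Connected components of the bipartite name-to-IP graph."""
    adj = defaultdict(set)
    for name, ip in edges:
        adj[("name", name)].add(("ip", ip))
        adj[("ip", ip)].add(("name", name))
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def is_biclique(comp, edges):
    """A component is a biclique iff every name in it maps to every IP in it."""
    names = {v for kind, v in comp if kind == "name"}
    ips = {v for kind, v in comp if kind == "ip"}
    observed = {(n, i) for n, i in edges if n in names}
    return observed == {(n, i) for n in names for i in ips}

# Invented records: two names sharing two addresses, plus a lone pair.
edges = [("a.example", "1.1.1.1"), ("a.example", "2.2.2.2"),
         ("b.example", "1.1.1.1"), ("b.example", "2.2.2.2"),
         ("c.example", "3.3.3.3")]
comps = components(edges)
print(len(comps), [is_biclique(c, edges) for c in comps])  # 2 [True, True]
```

The shared-hosting pattern (component one) and the one-to-one pattern (component two) are both exact bicliques; real DNS censuses contain components that are only approximately complete, which is what motivates an approximate decomposition.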
Speaker
Speaker biography is not available.

Silent Observers Make a Difference: A Large-scale Analysis of Transparent Proxies on the Internet

Rui Bian (Expatiate Communications, USA); Lin Jin (University of Delaware, USA); Shuai Hao (Old Dominion University, USA); Haining Wang (Virginia Tech, USA); Chase Cotton (University of Delaware, USA)

0
Transparent proxies are widely deployed on the Internet, bridging the communications between clients and servers and providing desirable benefits to both sides, such as load balancing, security monitoring, and privacy enhancement. Meanwhile, they work silently, as clients and servers may be unaware of their existence. However, due to their invisibility and stealthiness, transparent proxies remain understudied with respect to their behaviors, suspicious activities, and potential vulnerabilities that could be exploited by attackers. To better understand transparent proxies, we design and develop a framework to systematically investigate them in the wild. We identify two major types of transparent proxies, named FDR and CPV, respectively. FDR is a type of transparent proxy that independently performs Forced DNS Resolution during interception. CPV is a type of transparent proxy that presents a Cache Poisoning Vulnerability. We perform a large-scale measurement to detect each type of transparent proxy and examine their security implications. In total, we identify 32,246 FDR and 11,286 CPV transparent proxies. We confirm that these two types of transparent proxies are distributed globally: FDRs are observed in 98 countries and CPVs in 51 countries. Our work highlights the issues of vulnerable transparent proxies and provides insights for mitigating such problems.
Speaker
Speaker biography is not available.

Session Chair

Klaus Wehrle (RWTH Aachen University, Germany)

Enter Zoom
Session G-8

G-8: Ethereum Networks and Smart Contracts

Conference
8:30 AM — 10:00 AM PDT
Local
May 23 Thu, 10:30 AM — 12:00 PM CDT
Location
Prince of Wales/Oxford

LightCross: Sharding with Lightweight Cross-Shard Execution for Smart Contracts

Xiaodong Qi and Yi Li (Nanyang Technological University, Singapore)

0
Sharding is a prevailing solution to enhance the scalability of current blockchain systems. However, the cross-shard commit protocols adopted in these systems to commit cross-shard transactions commonly incur multi-round shard-to-shard communication, leading to low performance. Furthermore, most solutions focus only on simple transfer transactions without supporting complex smart contracts, hindering the widespread application of sharding. In this paper, we propose LightCross, a novel blockchain sharding system that enables the efficient execution of complex cross-shard smart contracts. First, LightCross offloads the execution of cross-shard transactions to off-chain executors equipped with TEE hardware, which can accommodate arbitrarily complex contracts. Second, we design a lightweight cross-shard commit protocol that commits cross-shard transactions without multi-round shard-to-shard communication. Last, LightCross lowers the cross-shard transaction ratio by dynamically changing the distribution of contracts according to historical transactions. We implemented a LightCross prototype based on the FISCO-BCOS project and evaluated it in real-world blockchain environments, showing that LightCross achieves 2.6x higher throughput than state-of-the-art sharding systems.
Speaker
Speaker biography is not available.

ConFuzz: Towards Large Scale Fuzz Testing of Smart Contracts in Ethereum

Taiyu Wong, Chao Zhang and Yuandong Ni (Institute for Network Sciences and Cyberspace, Tsinghua University, China); Mingsen Luo (University of Electronic Science and Technology of China, China); HeYing Chen (University of Science and Technology of China, China); Yufei Yu (Tsinghua University, China); Weilin Li (University of Science and Technology of China, China); Xiapu Luo (The Hong Kong Polytechnic University, Hong Kong); Haoyu Wang (Huazhong University of Science and Technology, China)

0
Fuzzing is effective at finding vulnerabilities in traditional applications and has been adapted to smart contracts. However, existing fuzzing solutions for smart contracts are not smart enough and can hardly be applied to large-scale testing since they heavily rely on source code or the ABI. In this paper, we propose ConFuzz, a fuzzing solution applicable to large-scale testing, especially for bytecode-only contracts. ConFuzz adopts Adaptive Interface Recovery (AIR) and Function Information Collection (FIC) algorithms to automatically recover function interfaces and information, supporting the fuzzing of smart contracts without source code or an ABI. Furthermore, ConFuzz employs a Dependence-based Transaction Sequence Generation (DTSG) algorithm to infer transaction dependencies and generate high-quality sequences that trigger vulnerabilities. Lastly, ConFuzz utilizes taint analysis and function information to help detect harmful vulnerabilities and reduce false positives. Experiments show that ConFuzz can accurately recover over 99.7% of function interfaces and reports more vulnerabilities than state-of-the-art solutions, with 98.89% precision and 93.69% accuracy. On all 1.4M unique contracts from Ethereum, ConFuzz found that over 11.92% of contracts are vulnerable. To the best of our knowledge, ConFuzz is the first efficient and scalable solution for testing all smart contracts deployed on Ethereum.
Speaker Taiyu Wong (Tsinghua University)

Blockchain engineer at Tsinghua University


Deanonymizing Ethereum Users behind Third-Party RPC Services

Shan Wang, Ming Yang, Wenxuan Dai and Yu Liu (Southeast University, China); Yue Zhang (Drexel University, USA); Xinwen Fu (University of Massachusetts Lowell, USA)

0
Third-party RPC services have become the mainstream way for users to access Ethereum. In this paper, we present a novel deanonymization attack that can link an Ethereum address to a real-world identity, such as the IP address of a user who accesses Ethereum via a third-party RPC service. We find that RPC API calls result in distinguishable sizes of encrypted TCP packets. An attacker can then detect when a user sends a transaction to an RPC provider and immediately send a beacon transaction after the user transaction. By exploiting the differences between the distributions of inter-arrival time intervals of normal transactions and of two simultaneously initiated transactions, the attacker can identify the victim transaction in the Ethereum network. This enables the attacker to correlate the Ethereum address of the victim transaction's initiator with the source IP address of TCP packets from the victim user. We model the attack through empirical measurements and conduct extensive real-world experiments to validate its effectiveness. With three optimization strategies, the correlation accuracy reaches 98.70% and 96.60% in the Ethereum testnet and mainnet, respectively. We are the first to study the deanonymization of Ethereum users behind third-party RPC services.
Speaker Shan Wang

Shan Wang is currently a Postdoctoral Fellow in the Department of Computing, The Hong Kong Polytechnic University. She obtained her Ph.D. degree from Southeast University. From Sep. 2019 to May 2023, she was a visiting scholar at UMass Lowell, USA. Her past work mainly focused on security problems in permissioned blockchains. Currently, she is working on the application of cryptography in blockchains and de-anonymization in public blockchains.


DEthna: Accurate Ethereum Network Topology Discovery with Marked Transactions

Chonghe Zhao (Shenzhen University, China); Yipeng Zhou (Macquarie University, Australia); Shengli Zhang and Taotao Wang (Shenzhen University, China); Quan Z. Sheng (Macquarie University, Australia); Song Guo (The Hong Kong University of Science and Technology, Hong Kong)

0
In Ethereum, nodes exchange messages over an underlying Peer-to-Peer (P2P) network to keep the ledger consistent. Understanding the underlying network topology of Ethereum is crucial for network optimization, security, and scalability. However, accurate discovery of the Ethereum network topology is non-trivial due to its deliberately designed security mechanisms. Consequently, existing measurement schemes cannot accurately infer the Ethereum network topology at a low cost. To address this challenge, we propose the Distributed Ethereum Network Analyzer (DEthna) tool, which can accurately and efficiently measure the Ethereum network topology. In DEthna, a novel parallel measurement model generates marked transactions to infer link connections based on the transaction replacement and propagation mechanism in Ethereum. Moreover, a workload offloading scheme is designed so that DEthna can be deployed on multiple probing nodes to measure a large-scale Ethereum network at a low cost. We run DEthna on Goerli (the most popular Ethereum test network) to evaluate its capability in discovering network topology. The experimental results demonstrate that DEthna significantly outperforms state-of-the-art baselines. Based on DEthna, we further analyze characteristics of the Ethereum blockchain network, revealing that more than 50% of Ethereum nodes have low degree, which weakens network robustness.
Speaker
Speaker biography is not available.

Session Chair

Wenhai Sun (Purdue University, USA)

Enter Zoom
Session Break-3-1

Coffee Break

Conference
10:00 AM — 10:30 AM PDT
Local
May 23 Thu, 12:00 PM — 12:30 PM CDT
Location
Regency Foyer & Hallway

Enter Zoom
Session A-9

A-9: Crowdsourcing and crowdsensing

Conference
10:30 AM — 12:00 PM PDT
Local
May 23 Thu, 12:30 PM — 2:00 PM CDT
Location
Regency A

Seer: Proactive Revenue-Aware Scheduling for Live Streaming Services in Crowdsourced Cloud-Edge Platforms

Shaoyuan Huang, Zheng Wang, Zhongtian Zhang and Heng Zhang (Tianjin University, China); Xiaofei Wang (Tianjin Key Laboratory of Advanced Networking, Tianjin University, China); Wenyu Wang (Shanghai Zhuichu Networking Technologies Co., Ltd., China)

0
As live streaming services skyrocket, Crowdsourced Cloud-edge service Platforms (CCPs) have surfaced as pivotal intermediaries catering to the mounting demand. Despite the importance of stream scheduling to a CCP's Quality of Service (QoS) and revenue, conventional optimization strategies struggle to enhance CCP revenue, primarily due to the intricate relationship between server utilization and revenue. Additionally, the substantial scale of CCPs magnifies the difficulties of time-intensive scheduling. To tackle these challenges, we propose Seer, a proactive revenue-aware scheduling system for live streaming services in CCPs. The design of Seer is motivated by meticulous measurements of real-world CCP environments, which allow us to achieve accurate revenue modeling and overcome three key obstacles that hinder the integration of prediction and optimal scheduling. Utilizing an innovative Pre-schedule-Execute-Re-schedule paradigm and flexible scheduling modes, Seer achieves efficient revenue-optimized scheduling in CCPs. Extensive evaluations demonstrate Seer's superiority over competitors in terms of revenue, utilization, and anomaly penalty mitigation, boosting CCP revenue by 147% and making scheduling 3.4 times faster.
Speaker Damien Saucez
Damien Saucez has been a researcher at Inria Sophia Antipolis since 2011. His current research interest is Software Defined Networking (SDN), with a particular focus on resiliency and robustness for very large networks. He is actively working to promote reproducibility in research, having led the ACM SIGCOMM 2017 Reproducibility Workshop and chaired the ACM SIGCOMM and ACM CoNEXT Artifacts Evaluation Committees.

QUEST: Quality-informed Multi-agent Dispatching System for Optimal Mobile Crowdsensing

Zuxin Li, Fanhang Man and Xuecheng Chen (Tsinghua University, China); Susu Xu (Stony Brook University, USA); Fan Dang (Tsinghua University, China); Xiao-Ping (Steven) Zhang (Tsinghua Shenzhen International Graduate School, China); Xinlei Chen (Tsinghua University, China)

0
In this work, we address the challenges of achieving optimal Quality of Information (QoI) for non-dedicated vehicular Mobile Crowdsensing (MCS) systems, which utilize vehicles not originally designed for sensing to provide real-time data while moving around the city. These challenges include coupled sensing coverage and sensing reliability, as well as uncertain and time-varying vehicle status. To tackle these issues, we propose QUEST, a QUality-informed multi-agEnt diSpaTching system that ensures high sensing coverage and reliability in non-dedicated vehicular MCS. QUEST optimizes QoI by introducing a novel metric called ASQ (aggregated sensing quality), which jointly considers sensing coverage and sensing reliability. Additionally, we design a mutual-aided truth discovery dispatching method to estimate sensing reliability and improve ASQ under uncertain vehicle statuses. Real-world data from our MCS system deployed in a metropolis is used for evaluation, demonstrating that QUEST achieves up to 26% higher ASQ, reducing map reconstruction errors by 32-65% for different reconstruction algorithms.
Speaker Ahmed Imteaj
Ahmed Imteaj is an Assistant Professor in the School of Computing at Southern Illinois University, Carbondale. He is the director of the Security, Privacy and Edge intElligence for Distributed networks Laboratory (SPEED Lab). He received his Ph.D. in Computer Science from Florida International University in 2022, earning the distinction of being an FIU Real Triumph Graduate, and received his M.Sc. with the Outstanding Master's Degree Graduate Award. Prior to that, Ahmed received a B.Sc. degree in Computer Science and Engineering from Chittagong University of Engineering and Technology. His research interests encompass a wide range of fields, including Federated Learning, Generative AI, Interdependent Networks, Cybersecurity, and IoT. Ahmed has made significant contributions to the domains of privacy-preserving distributed machine learning and IoT, with his research published in prestigious conferences and peer-reviewed journals. He has received several accolades, including the 2022 Outstanding Student Life: Graduate Scholar of the Year Award and the 2021 Best Graduate Student in Research Award from FIU's Knight Foundation School of Computing and Information Sciences.

Combinatorial Incentive Mechanism for Bundling Spatial Crowdsourcing with Unknown Utilities

Hengzhi Wang, Laizhong Cui and Lei Zhang (Shenzhen University, China); Linfeng Shen and Long Chen (Simon Fraser University, Canada)

0
Incentive mechanisms in Spatial Crowdsourcing (SC) have been widely studied as they provide an effective way to motivate mobile workers to perform spatial tasks. Yet, most existing mechanisms only involve single tasks, neglecting the complementarity and substitutability among tasks, which limits their effectiveness in practical cases. Motivated by this, we consider task bundles for incentive mechanism design and closely analyze the mutual exclusion effect that arises with task bundles. We then develop a combinatorial incentive mechanism comprising three key policies: in the offline case, we propose a combinatorial assignment policy to address the conflict between mutual exclusion and assignment efficiency. We next study the conflict between mutual exclusion and truthfulness, and build a combinatorial pricing policy for paying winners that yields both incentive compatibility and individual rationality. In the online case with unknown worker utilities, we present an online combinatorial assignment policy that balances the exploration-exploitation trade-off under the mutual exclusion constraints. Through theoretical analysis and numerical simulations using real-world mobile networking datasets, we demonstrate the effectiveness of the proposed mechanism.
Speaker
Speaker biography is not available.

Few-Shot Data Completion for New Tasks in Sparse CrowdSensing

En Wang, Mijia Zhang and Bo Yang (Jilin University, China); Yang Xu (Hunan University, China); Zixuan Song and Yongjian Yang (Jilin University, China)

0
Mobile Crowdsensing is a technology that utilizes mobile devices and volunteers to gather data about specific topics at large scale in real time. In practice, however, limited participation leads to missing data: the collected data may be sparse, making accurate analysis difficult. Sparse crowdsensing addresses this by coupling the sparse sensed data with data completion, where unsensed data is estimated through inference. However, sparse crowdsensing typically suffers from poor performance during the data completion stage due to several challenges: the sparsity of the sensed data, reliance on numerous timeslots, and uncertain spatiotemporal connections. To resolve such few-shot issues, the proposed solution uses the Correlated Data Fusion for Matrix Completion (CDFMC) approach, which leverages a small amount of objective data to retrain a model pre-trained on an auxiliary dataset, so that unsensed data can be estimated efficiently. CDFMC is trained using a combination of traditional Deep Matrix Factorization and Kalman Filtering, which not only enables the efficient representation and comparison of data samples but also fuses the objective and auxiliary data effectively. Evaluation results show that the proposed CDFMC outperforms baseline techniques, achieving high accuracy in completing unsensed data with minimal training data.
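CDFMC combines deep matrix factorization with Kalman filtering; as a much simpler stand-in for the matrix-completion step only, the sketch below fits a rank-1 factorization to the observed entries of a toy sensing matrix by alternating least squares and uses the fitted factors to fill in the unsensed cells. The matrix, mask, and rank-1 assumption are invented for illustration.

```python
def complete_rank1(M, mask, iters=200):
    """Alternating least squares under a rank-1 assumption: fit
    M[i][j] ~ u[i]*v[j] on observed entries (mask[i][j] == 1),
    then use the outer product to fill in the unsensed cells."""
    m, n = len(M), len(M[0])
    u, v = [1.0] * m, [1.0] * n
    for _ in range(iters):
        for i in range(m):  # best u[i] given v, on observed entries
            num = sum(mask[i][j] * M[i][j] * v[j] for j in range(n))
            den = sum(mask[i][j] * v[j] ** 2 for j in range(n)) or 1.0
            u[i] = num / den
        for j in range(n):  # best v[j] given u, on observed entries
            num = sum(mask[i][j] * M[i][j] * u[i] for i in range(m))
            den = sum(mask[i][j] * u[i] ** 2 for i in range(m)) or 1.0
            v[j] = num / den
    return [[u[i] * v[j] for j in range(n)] for i in range(m)]

# Invented ground truth: a rank-1 "sensing map", partially observed.
truth = [[r * c for c in (4, 5, 6, 7)] for r in (1, 2, 3)]
mask = [[1, 1, 0, 1], [1, 0, 1, 1], [0, 1, 1, 0]]
est = complete_rank1(truth, mask)
print(est[0][2], est[2][0], est[2][3])  # recovers the unsensed 6, 12, 21
```

Because the observed entries are exactly rank-1 consistent and the observation pattern is connected, the alternating updates converge to the exact completion; real sensing matrices are noisy and higher-rank, which is where the deep factorization and Kalman fusion come in.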
Speaker Mijia Zhang (Jilin University)

Mijia Zhang received his Ph.D. degree in computer system architecture from Jilin University, Changchun, China, in 2023. His current work focuses on sparse Mobile CrowdSensing, Neural Networks, spatiotemporal data inference and matrix completion.


Session Chair

Srinivas Shakkottai (Texas A&M University, USA)

Enter Zoom
Session B-9

B-9: Localization and Tracking

Conference
10:30 AM — 12:00 PM PDT
Local
May 23 Thu, 12:30 PM — 2:00 PM CDT
Location
Regency B

ATP: Acoustic Tracking and Positioning under Multipath and Doppler Effect

Guanyu Cai and Jiliang Wang (Tsinghua University, China)

0
Acoustic tracking and positioning technologies using microphones and speakers have gained significant interest for applications like virtual reality, augmented reality, and IoT devices. However, existing methods still face challenges in real-world deployment due to multipath interference, Doppler frequency shift, and sampling frequency offset between devices. We propose a versatile Acoustic Tracking and Positioning (ATP) method to address these challenges. First, we propose an iterative sampling frequency offset calibration method. Next, we propose a Doppler frequency shift estimation and compensation model. Finally, we propose a fast adaptive algorithm to reconstruct the line-of-sight (LOS) signal under multipath. We implement ATP on Android and PC and compare it with eight different methods. Evaluation results show that ATP achieves mean accuracies of 0.66 cm, 0.56 cm, and 1.0 cm in tracking, ranging, and positioning tasks, which is 2×, 6×, and 5.8× better than state-of-the-art methods. ATP advances acoustic sensing for practical applications by providing a robust solution for real-world environments.
Speaker Guanyu Cai (Tsinghua University)



EventBoost: Event-based Acceleration Platform for Real-time Drone Localization and Tracking

Hao Cao, Jingao Xu, Danyang Li and Zheng Yang (Tsinghua University, China); Yunhao Liu (Tsinghua University & The Hong Kong University of Science and Technology, China)

0
Drones have demonstrated their pivotal role in various applications such as search-and-rescue, smart logistics, and industrial inspection, with accurate localization playing an indispensable part. However, in high-dynamic-range and rapid-motion scenarios, traditional visual sensors often face challenges in pose estimation. Event cameras, with their high temporal resolution, present a fresh opportunity for perception in such challenging environments. Current efforts resort to event-visual fusion to enhance the drone's sensing capability. Yet, the lack of efficient event-visual fusion algorithms and corresponding acceleration hardware leaves the potential of event cameras underutilized. In this paper, we introduce EventBoost, an acceleration platform designed for drone-based applications with event-image fusion. We propose a suite of novel algorithms through software-hardware co-design on a Zynq SoC, aimed at enhancing real-time localization precision and speed. EventBoost achieves enhanced visual fusion precision and markedly elevated processing efficiency. A performance comparison with two state-of-the-art systems shows that EventBoost achieves a 24.33% improvement in accuracy with a 30 ms latency on resource-constrained platforms. We further substantiate EventBoost's exemplary performance through real-world application cases.
Speaker Hao Cao (Tsinghua University)

Hao Cao is a Ph.D. candidate in the School of Software at Tsinghua University, Beijing, China. He received his B.E. degree from the College of Intelligence and Computing, Tianjin University, in 2019. His research interests lie in the Internet of Things and Mobile Computing.


BLE Location Tracking Attacks by Exploiting Frequency Synthesizer Imperfection

Yeming Li, Hailong Lin, Jiamei Lv, Yi Gao and Wei Dong (Zhejiang University, China)

0
In recent years, Bluetooth Low Energy (BLE) has become one of the most widely used wireless protocols, and it is common for users to carry one or more BLE devices. With the extensive deployment of BLE devices, there is a significant privacy risk if these devices can be tracked. However, the common wisdom suggests that the risk of BLE location tracking is negligible, because researchers believe there are no BLE fingerprints that remain stable across different scenarios (e.g., temperatures) for different BLE devices of the same model. In this paper, we introduce a novel physical-layer fingerprint named Transient Dynamic Fingerprint (TDF), which originates from the negative feedback control process of the frequency synthesizer. Because of hardware imperfection, the dynamic features of the frequency synthesizer differ across devices, making TDF unique even among devices of the same model. Furthermore, TDF remains stable under different thermal conditions. Based on TDF, we propose BTrack, a practical BLE device tracking system, and evaluate its tracking performance in different environments. The results show that BTrack works well once BLE beacons are effectively received. Its identification accuracy is 35.38%-57.41% higher than the existing method and remains stable across temperatures, distances, and locations.
Speaker Yeming Li (Zhejiang University)

Yeming Li is currently a Ph.D. candidate in the College of Computer Science, Zhejiang University, Hangzhou, China. He received his bachelor's degree from the Zhejiang University of Technology, Hangzhou, China. His research interests include IoT, wireless communication, and BLE.


ORAN-Sense: Localizing Non-cooperative Transmitters with Spectrum Sensing and 5G O-RAN

Yago Lizarribar (IMDEA Networks, Spain); Roberto Calvo-Palomino (Universidad Rey Juan Carlos, Spain); Alessio Scalingi (IMDEA Networks, Spain); Giuseppe Santaromita (IMDEA Networks Institute, Spain); Gérôme Bovet (Armasuisse, Switzerland); Domenico Giustiniano (IMDEA Networks Institute, Spain)

Prior initiatives to build crowdsensing networks for the sole purpose of performing spectrum measurements have failed, primarily due to their maintenance costs. In this paper, we take a different view and propose ORAN-Sense, a novel architecture of IoT spectrum crowdsensing devices integrated into the next generation of cellular networks. We use this framework to extend the capabilities of 5G networks and localize a transmitter that does not collaborate in the positioning process. While 5G signals cannot be applied to this scenario, as the transmitter does not participate in the localization process through dedicated pilot symbols and data, we show how to perform Time Difference of Arrival-based positioning using low-cost spectrum sensors: we minimize the hardware impairments of low-cost spectrum receivers, introduce methods to address errors caused by over-the-air signal propagation, and propose a low-cost synchronization technique. We have deployed our localization network in two major European cities. Our experimental results indicate that localization of non-collaborative transmitters is feasible even with low-cost radio receivers, achieving median accuracies of tens of meters with just a few sensors spanning a city, which makes it suitable for integration into the next generation of cellular networks.
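As background, the geometric core of this kind of system, Time Difference of Arrival (TDoA) multilateration, can be sketched with a brute-force search. The sensor layout, grid resolution, and noise-free measurements below are illustrative assumptions, not the paper's deployment:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tdoa_residual(p, sensors, tdoas):
    """Sum of squared differences between measured and predicted TDoAs
    (all time differences are taken relative to sensors[0])."""
    d0 = math.dist(p, sensors[0])
    err = 0.0
    for s, dt in zip(sensors[1:], tdoas):
        predicted = (math.dist(p, s) - d0) / C
        err += (dt - predicted) ** 2
    return err

def locate(sensors, tdoas, span=1000, step=5):
    """Coarse grid search over a span x span metre area."""
    return min(
        ((x, y) for x in range(0, span, step) for y in range(0, span, step)),
        key=lambda p: tdoa_residual(p, sensors, tdoas),
    )

# Synthetic check: transmitter at (400, 250), four sensors at the corners.
sensors = [(0, 0), (1000, 0), (0, 1000), (1000, 1000)]
tx = (400, 250)
d0 = math.dist(tx, sensors[0])
tdoas = [(math.dist(tx, s) - d0) / C for s in sensors[1:]]
print(locate(sensors, tdoas))  # → (400, 250) on this noise-free example
```

A real deployment must additionally handle receiver clock offsets, hardware impairments, and propagation errors, which is where the paper's contributions lie.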
Speaker
Speaker biography is not available.

Session Chair

Jin Nakazato (The University of Tokyo, Japan)

Session C-9

C-9: Internet of Things (IoT) Networks

Conference
10:30 AM — 12:00 PM PDT
Local
May 23 Thu, 12:30 PM — 2:00 PM CDT
Location
Regency C

DTMM: Deploying TinyML Models on Extremely Weak IoT Devices with Pruning

Lixiang Han, Zhen Xiao and Zhenjiang Li (City University of Hong Kong, Hong Kong)

DTMM is a library designed for efficient deployment and execution of machine learning models on weak IoT devices such as microcontroller units (MCUs). The motivation for designing DTMM comes from the emerging field of tiny machine learning (TinyML), which explores extending the reach of machine learning to many low-end IoT devices to achieve ubiquitous intelligence. Due to the weak capability of embedded devices, it is necessary to compress models by pruning enough weights before deploying. Although pruning has been studied extensively on many computing platforms, two key issues with pruning methods are exacerbated on MCUs: models need to be deeply compressed without significantly compromising accuracy, and they should perform efficiently after pruning. Current solutions only achieve one of these objectives, but not both. In this paper, we find that pruned models have great potential for efficient deployment and execution on MCUs. Therefore, we propose DTMM with pruning unit selection, pre-execution pruning optimizations, runtime acceleration, and post-execution low-cost storage to fill the gap for efficient deployment and execution of pruned models. It can be integrated into commercial ML frameworks for practical deployment, and a prototype system has been developed. Extensive experiments on various models show promising gains compared to state-of-the-art methods.
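As a rough illustration of the compression step the abstract refers to, the sketch below applies plain magnitude pruning to a flat weight list. DTMM's actual pruning-unit selection and runtime acceleration are more sophisticated; the weights and sparsity here are made up:

```python
def prune_by_magnitude(weights, sparsity):
    """Zero out the fraction `sparsity` of smallest-magnitude weights."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    # k-th smallest magnitude serves as the pruning threshold
    threshold = sorted(abs(w) for w in weights)[k - 1]
    kept = 0  # prune exactly k weights even when magnitudes tie
    pruned = []
    for w in weights:
        if abs(w) <= threshold and kept < k:
            pruned.append(0.0)
            kept += 1
        else:
            pruned.append(w)
    return pruned

weights = [0.9, -0.05, 0.4, 0.01, -0.7, 0.3, -0.02, 0.6]
print(prune_by_magnitude(weights, 0.5))
# → [0.9, 0.0, 0.4, 0.0, -0.7, 0.0, 0.0, 0.6]
```

On an MCU the payoff comes from storing and executing the pruned model in a compact sparse format, which is exactly the gap DTMM targets.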
Speaker
Speaker biography is not available.

Memory-Efficient and Secure DNN Inference on TrustZone-enabled Consumer IoT Devices

Xueshuo Xie (Haihe Lab of ITAI, China); Haoxu Wang, Zhaolong Jian and Li Tao (Nankai University, China); Wei Wang (Beijing Jiaotong University, China); Zhiwei Xu (Haihe Lab of ITAI); Guiling Wang (New Jersey Institute of Technology, USA)

Edge intelligence enables resource-demanding DNN inference without transferring original data, addressing concerns about data privacy in consumer IoT devices. For privacy-sensitive applications, deploying models in hardware-isolated trusted execution environments (TEEs) becomes essential. However, the limited secure memory in TEEs poses challenges for deploying DNN inference, and alternative techniques like model partitioning and offloading introduce performance degradation and security issues. In this paper, we present a novel approach for advanced model deployment in TrustZone that ensures comprehensive privacy preservation during model inference. We design memory-efficient management methods to support memory-demanding inference in TEEs. By adjusting the memory priority, we effectively mitigate memory leakage risks and memory overlap conflicts, requiring only 32 lines of code alterations in the trusted operating system. Additionally, we leverage two tiny libraries: S-Tinylib (2,538 LoCs), a tiny deep learning library, and Tinylibm (827 LoCs), a tiny math library, to support efficient inference in TEEs. We implemented a prototype on a Raspberry Pi 3B+ and evaluated it using three well-known lightweight DNN models. The experimental results demonstrate that our design improves inference speed by 3.13 times and reduces power consumption by over 66.5% compared to non-memory-optimized methods in TEEs.
Speaker Haoxu Wang (Nankai University)

Haoxu Wang received his B.S. degree in computer science and technology from Shandong University in 2021. He is currently working toward his M.Sc. degree in the College of Computer Science, Nankai University. His main research interests include Trusted Execution Environments, the Internet of Things, machine learning, and edge computing.


VisFlow: Adaptive Content-Aware Video Analytics on Collaborative Cameras

Yuting Yan, Sheng Zhang, Xiaokun Wang, Ning Chen and Yu Chen (Nanjing University, China); Yu Liang (Nanjing Normal University, China); Mingjun Xiao (University of Science and Technology of China, China); Sanglu Lu (Nanjing University, China)

There is an increasing demand for querying live surveillance video streams from large-scale camera networks, for applications such as public safety and smart cities. To deal with the conflict between computing-intensive detection models and limited resources on cameras, a detection-with-tracking framework has gained prominence. Nevertheless, because trackers are susceptible to occlusions and newly appearing objects, frequent detections are required to calibrate the results, leading to detection demands that vary with video content. Consequently, we propose VisFlow, a mechanism for content-aware analytics on collaborative cameras, to improve detection quality and meet the latency requirement by fully utilizing camera resources. We formulate this as a non-linear integer program over a long-term scope to maximize detection accuracy. An online mechanism, underpinned by a queue-based algorithm and randomized rounding, is then devised to dynamically orchestrate detection workloads among cameras, adapting to fluctuating detection demands. Through rigorous proof, both the dynamic regret with respect to overall accuracy and the transmission budget are bounded in the long run. Testbed experiments on Jetson kits demonstrate that VisFlow improves accuracy by 18.3% over the alternatives.
Speaker
Speaker biography is not available.

SAMBA: Detecting SSL/TLS API Misuses in IoT Binary Applications

Kaizheng Liu, Ming Yang and Zhen Ling (Southeast University, China); Yuan Zhang (Fudan University, China); Chongqing Lei (Southeast University, China); Lan Luo (Anhui University of Technology, China); Xinwen Fu (University of Massachusetts Lowell, USA)

IoT devices are increasingly adopting the Secure Socket Layer (SSL) and Transport Layer Security (TLS) protocols. However, the misuse of SSL/TLS libraries still threatens communication security. Existing tools for detecting SSL/TLS API misuses primarily rely on source code analysis, while IoT applications are usually released as binaries with no source code. This paper presents SAMBA, a novel tool to automatically detect SSL/TLS API misuses in IoT binaries through static analysis. To overcome the path explosion problem and deal with various SSL/TLS implementations, we introduce a three-level reduction method to construct the SSL/TLS API-centric graph (SAG), which is much smaller than the conventional interprocedural control flow graph. We propose a formal expression of API misuse signatures that is capable of capturing different types of misuse, particularly those in the SSL/TLS connection establishment process. We successfully analyze 115 IoT binaries and find that 94 of them are vulnerable to insecure certificate verification and 112 support deprecated SSL/TLS protocols. SAMBA is the first IoT binary analysis system for detecting SSL/TLS API misuses.
Speaker Kaizheng Liu (Southeast University)



Session Chair

Zhangyu Guan (University at Buffalo, USA)

Session D-9

D-9: RFID and Wireless Charging

Conference
10:30 AM — 12:00 PM PDT
Local
May 23 Thu, 12:30 PM — 2:00 PM CDT
Location
Regency D

RF-Boundary: RFID-Based Virtual Boundary

Xiaoyu Li and Jia Liu (Nanjing University, China); Xuan Liu (Hunan University, China); Yanyan Wang (Hohai University, China); Shigeng Zhang (Central South University, China); Baoliu Ye and Lijun Chen (Nanjing University, China)

A boundary is a physical or virtual line that marks the edge or limit of a specific region; boundaries are widely used in many applications, such as autonomous driving, virtual walls, and robotic lawn mowers. However, none of the existing work can well balance the cost, deployability, and scalability of a boundary. In this paper, we propose a new RFID-based boundary scheme together with its detection algorithm, called RF-Boundary, which has the competitive advantages of being battery-free, low-cost, and easy to maintain. We develop two technologies, phase gradient and dual-antenna DOA, to address the key challenges posed by RF-Boundary: the lack of calibration information and multi-edge interference. We implement a prototype of RF-Boundary with commercial RFID systems and a mobile robot. Extensive experiments verify the feasibility and the good performance of RF-Boundary.
Speaker Xiaoyu Li (Nanjing University)

Xiaoyu Li is currently a PhD. student at Nanjing University. He received his B.E. degree from Huazhong University of Science and Technology (HUST). His research interests include wireless sensing and communication.


Safety Guaranteed Power-Delivered-to-Load Maximization for Magnetic Wireless Power Transfer

Wangqiu Zhou, Xinyu Wang, Hao Zhou and ShenYao Jiang (University of Science and Technology of China, China); Zhi Liu (The University of Electro-Communications, Japan); Yusheng Ji (National Institute of Informatics, Japan)

Electromagnetic radiation (EMR) safety has always been a critical obstacle to the development of magnetic wireless power transfer technology: users care about the energy actually received at charging devices, but also about their health. We study this significant problem in this paper and propose a universal safety-guaranteed power-delivered-to-load (PDL) maximization scheme called SafeGuard. Technically, we first utilize an off-the-shelf electromagnetic simulator to perform the EMR distribution analysis, ensuring the universality of the method. Then, we innovatively introduce the concept of multiple importance sampling to achieve efficient EMR safety constraint extraction. Finally, we treat the proposed optimization problem as an optimal boundary point search problem from the perspective of space geometry, and devise a brand-new grid-based multi-constraint parallel processing algorithm to solve it efficiently. We implement a system prototype of SafeGuard and conduct extensive experiments to evaluate it. The results indicate that SafeGuard improves the achieved PDL by up to 1.75× compared with the state-of-the-art baseline while guaranteeing EMR safety. Furthermore, SafeGuard accelerates the solution process by 29.12× compared with the traditional numerical method, satisfying the fast optimization requirement of wireless charging systems.
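Multiple importance sampling, the generic technique the abstract borrows for constraint extraction, can be illustrated with a toy integral. The integrand and the two proposal densities below are assumptions for this sketch, not SafeGuard's EMR model:

```python
import math
import random

def balance_mis(f, samplers, n_per=10_000, seed=0):
    """Estimate an integral of f with the balance heuristic over
    several (draw, pdf) proposal pairs, n_per samples from each."""
    rng = random.Random(seed)
    total = 0.0
    for draw, _ in samplers:
        for _ in range(n_per):
            x = draw(rng)
            # balance heuristic: the per-sample weight w_i * f/p_i / n_i
            # collapses to f(x) / sum_j n_j * p_j(x)
            total += f(x) / sum(n_per * pdf(x) for _, pdf in samplers)
    return total

f = lambda x: math.exp(-x)  # toy integrand on [0, 4]
samplers = [
    (lambda r: r.uniform(0.0, 2.0), lambda x: 0.5 if 0.0 <= x <= 2.0 else 0.0),
    (lambda r: r.uniform(0.0, 4.0), lambda x: 0.25 if 0.0 <= x <= 4.0 else 0.0),
]
estimate = balance_mis(f, samplers)
print(estimate)  # close to the true value 1 - e^{-4} ≈ 0.9817
```

The appeal of the balance heuristic is that it stays low-variance when no single proposal density matches the integrand everywhere, which is plausibly why it suits sampling an irregular EMR field.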
Speaker Junaid Ahmed Khan
Junaid Ahmed Khan is a PACCAR-endowed Assistant Professor in Electrical and Computer Engineering at Western Washington University. Previously, he was a research associate at the Center for Urban Science and Progress (CUSP) and the Connected Cities with Smart Transportation (C2SMART) center at New York University from September 2019 to September 2020. He was also a research fellow at the DRONES and Smart City research clusters at the FedEx Institute of Technology, University of Memphis, from January 2018 to August 2019, and a senior researcher in the Inria Agora team, CITI lab, at the National Institute of Applied Sciences (INSA), Lyon, France, from October 2016 to January 2018. He received his Ph.D. in Computer Science from Université Paris-Est, Marne-la-Vallée, France, in November 2016. His research interests are cyber-physical systems, with emphasis on Connected Autonomous Vehicles (CAVs) and the Internet of Things (IoT).

Dynamic Power Distribution Controlling for Directional Chargers

Yuzhuo Ma, Dié Wu and Jing Gao (Sichuan Normal University, China); Wen Sun (Northwestern Polytechnical University, China); Jilin Yang and Tang Liu (Sichuan Normal University, China)

Recently, deploying static chargers to construct timely and robust Wireless Rechargeable Sensor Networks (WRSNs) has become an important research issue for solving the limited-energy problem of wireless sensor networks. However, a fixed power distribution, once established, lacks flexibility in responding to dynamic charging requests from sensors and may leave some sensors continuously impacted by destructive wave interference. This results in a gap between energy supply and practical demand, making the charging process less efficient. In this paper, we focus on real-time sensor charging requests and formulate a dynamic power disTributIon controlling for Directional chargErs (TIDE) problem to maximize the overall charging utility. To solve the problem, we first build a charging model for directional chargers that considers wave interference, and extract the candidate charging orientations from the continuous search space. Then we propose a neighbor-set division method to narrow the scope of calculation. Finally, we design a dynamic power distribution controlling algorithm to update the neighbor sets in a timely manner and select optimal orientations for chargers. Our experimental results demonstrate the effectiveness and efficiency of the proposed scheme; it outperforms the comparison algorithms by 142.62% on average.
Speaker Yuzhuo Ma (Sichuan Normal University)

Yuzhuo Ma received the BS degree in mechanical engineering from Soochow University, Suzhou, China, in 2019. She is studying towards the MS degree in the College of Computer Science, Sichuan Normal University. Her research interests focus on wireless rechargeable sensor networks.


LoMu: Enable Long-Range Multi-Target Backscatter Sensing for Low-Cost Tags

Yihao Liu, Jinyan Jiang and Jiliang Wang (Tsinghua University, China)

Backscatter sensing has shown great potential in the Internet of Things (IoT) and has attracted substantial research interest. We present LoMu, the first long-range multi-target backscatter sensing system for low-cost tags under ambient LoRa. LoMu analyzes the received low-SNR backscatter signals from different tags and calculates their phases to derive motion information. The design of LoMu faces practical challenges, including near-far interference between multiple tags, phase offsets induced by unsynchronized transceivers, and phase errors due to frequency drift in low-cost tags. We propose a conjugate-based energy concentration method to extract high-quality signals and a Hamming-window-based method to alleviate the near-far problem. We then leverage the relationship between the excitation signal and the backscatter signals to synchronize TX and RX. Finally, we combine the double sidebands of the backscatter signals to cancel the tag frequency drift. We implement LoMu and conduct extensive experiments to evaluate its performance. The results demonstrate that LoMu can accurately sense 35 tags at the same time. The average frequency sensing error is 0.7% at 400 m, which is 4× the distance of the state-of-the-art.
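The double-sideband idea can be illustrated with a toy phase model: the tag's switching phase (drift included) enters the upper and lower sidebands with opposite signs, so multiplying the two complex sidebands cancels it. The signal model and numbers below are illustrative assumptions, not LoMu's actual processing chain:

```python
import cmath

def combine_sidebands(upper, lower):
    """Phase of the product of the two sideband samples; terms that
    appear with opposite signs in the sidebands cancel out."""
    return cmath.phase(upper * lower)

phi_channel = 0.8  # motion-related phase we want to keep
phi_tag = 2.3      # tag switching phase, including oscillator drift

# Toy sideband samples: channel phase adds, tag phase flips sign.
upper = cmath.exp(1j * (phi_channel + phi_tag))
lower = cmath.exp(1j * (phi_channel - phi_tag))

combined = combine_sidebands(upper, lower)
print(round(combined, 6))  # ≈ 1.6, i.e. 2 * phi_channel; the tag term cancels
```

Because the drifting tag term never appears in the combined phase, the motion-related component can be tracked without calibrating each low-cost tag's oscillator.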
Speaker Yihao Liu (Tsinghua University)

Yihao Liu received the BE degree from the School of Software, Tsinghua University, China, in 2023. He is currently working toward the Master's degree in the School of Software, at Tsinghua University. His research interests include low-power wide-area networks, wireless sensing, and the Internet of Things.


Session Chair

Filip Maksimovic (Inria, France)

Session E-9

E-9: Machine Learning 3

Conference
10:30 AM — 12:00 PM PDT
Local
May 23 Thu, 12:30 PM — 2:00 PM CDT
Location
Regency E

Parm: Efficient Training of Large Sparsely-Activated Models with Dedicated Schedules

Xinglin Pan (Hong Kong Baptist University, Hong Kong); Wenxiang Lin and Shaohuai Shi (Harbin Institute of Technology, Shenzhen, China); Xiaowen Chu (The Hong Kong University of Science and Technology (Guangzhou) & The Hong Kong University of Science and Technology, Hong Kong); Weinong Sun (The Hong Kong University of Science and Technology, Hong Kong); Bo Li (Hong Kong University of Science and Technology, Hong Kong)

Sparsely-activated Mixture-of-Expert (MoE) layers have found practical applications in enlarging the model size of large-scale foundation models, with only a sub-linear increase in computation demands. Despite the wide adoption of hybrid parallel paradigms like model parallelism, expert parallelism, and expert-sharding parallelism (i.e., MP+EP+ESP) to support MoE model training on GPU clusters, the training efficiency is hindered by communication costs introduced by these parallel paradigms. To address this limitation, we propose Parm, a system that accelerates MP+EP+ESP training by designing two dedicated schedules for placing communication tasks. The proposed schedules eliminate redundant computations and communications and enable overlaps between intra-node and inter-node communications, ultimately reducing the overall training time. As the two schedules are not mutually exclusive, we provide comprehensive theoretical analyses and derive an automatic and accurate solution to determine which schedule should be applied in different scenarios. Experimental results on an 8-GPU server and a 32-GPU cluster demonstrate that Parm outperforms the state-of-the-art MoE training system, DeepSpeed-MoE, achieving 1.13x-5.77x speedup on 1296 manually configured MoE layers and approximately 3x improvement on two real-world MoE models based on BERT and GPT-2.
Speaker
Speaker biography is not available.

Predicting Multi-Scale Information Diffusion via Minimal Substitution Neural Networks

Ranran Wang (University of Electronic Science and Technology of China, China); Yin Zhang (University of Electronic Science and Technology, China); Wenchao Wan and Xiong Li (University of Electronic Science and Technology of China, China); Min Chen (Huazhong University of Science and Technology, China)

Information diffusion prediction is a complex task due to the numerous variables present on large social platforms like Weibo and Twitter. While many researchers have focused on the internal influence of individual cascades, they often overlook other influential factors that affect information diffusion, including competition and cooperation among information items, the attractiveness of information to users, and the potential impact of content anticipation on further diffusion. Traditional methods relying on individual information modeling struggle to consider these aspects comprehensively. To address these issues, we propose a multi-scale information diffusion prediction method with a minimal substitution neural network, called MIDPMS. Specifically, to simultaneously enable macro-scale popularity prediction and micro-scale diffusion prediction, we model information diffusion as a substitution process among different information sources. Furthermore, considering the life cycle of content, user preferences, and potential content anticipation, we introduce minimal substitution theory and design a minimal substitution neural network to model this substitution system and facilitate joint training of macroscopic and microscopic diffusion prediction. Extensive experiments on Weibo and Twitter datasets demonstrate that MIDPMS significantly outperforms state-of-the-art methods on both multi-scale tasks.
Speaker Ranran Wang (University of Electronic Science and Technology of China, China)

Ranran Wang is currently a PhD candidate in the School of Information and Communication Engineering, University of Electronic Science and Technology of China. Her main research interests include edge intelligence, cognitive wireless communications, and graph learning.


Online Resource Allocation for Edge Intelligence with Colocated Model Retraining and Inference

Huaiguang Cai (Sun Yat-Sen University, China); Zhi Zhou (Sun Yat-sen University, China); Qianyi Huang (Sun Yat-Sen University, China & Peng Cheng Laboratory, China)

Due to several kinds of drift, the traditional computing paradigm of deploying a trained model and then performing inference can no longer meet accuracy requirements. Accordingly, a new computing paradigm has emerged in which, after deployment, the model is retrained and performs inference on new data simultaneously (we call it model inference and retraining co-location). The key challenge is how to allocate computing resources between model retraining and inference to improve long-term accuracy, especially when computing resources change dynamically.
We address this challenge by first modeling the relationship between model performance and different retraining and inference configurations, and then proposing a linear-complexity online algorithm.
Our algorithm approximately solves the original non-convex, integer, time-coupled problem by adjusting the proportion between model retraining and inference according to the available real-time computing resources. Its competitive ratio is strictly better than the tight competitive ratio of the Inference-Only algorithm (corresponding to the traditional computing paradigm) when data drift persists for a sufficiently long time, implying the advantages and applications of the model inference and retraining co-location paradigm. In particular, our algorithm translates to several heuristic algorithms in different environments. Experiments based on real scenarios confirm its effectiveness.
Speaker
Speaker biography is not available.

Tomtit: Hierarchical Federated Fine-Tuning of Giant Models based on Autonomous Synchronization

Tianyu Qi and Yufeng Zhan (Beijing Institute of Technology, China); Peng Li (The University of Aizu, Japan); Yuanqing Xia (Beijing Institute of Technology, China)

With the quick evolution of giant models, the paradigm of pre-training models and then fine-tuning them for downstream tasks has become increasingly popular. The adapter has been recognized as an efficient fine-tuning technique and attracts much research attention. However, adapter-based fine-tuning still faces the challenge of lacking sufficient data. Federated fine-tuning has been recently proposed to fill this gap, but existing solutions suffer from a serious scalability issue, and they are inflexible in handling dynamic edge environments. In this paper, we propose Tomtit, a hierarchical federated fine-tuning system that can significantly accelerate fine-tuning and improve the energy efficiency of devices. Via extensive empirical study, we find that model synchronization schemes (i.e., when edges and devices should synchronize their models) play a critical role in federated fine-tuning. The core of Tomtit is a distributed design that allows each edge and device to have a unique synchronization scheme with respect to their heterogeneity in model structure, data distribution and computing capability. Furthermore, we provide a theoretical guarantee about the convergence of Tomtit. Finally, we develop a prototype of Tomtit and evaluate it on a testbed. Experimental results show that it can significantly outperform the state-of-the-art.
Speaker Tianyu Qi (Beijing Institute of Technology, China)

Tianyu Qi received the BS degree from China University of Geosciences, Wuhan, China, in 2021. He is currently pursuing the MS degree in the School of Automation at the Beijing Institute of Technology, Beijing, China. His research interests include federated learning, cloud computing, and machine learning.


Session Chair

Marco Fiore (IMDEA Networks Institute, Spain)

Session F-9

F-9: Hashing, Clustering, and Optimization

Conference
10:30 AM — 12:00 PM PDT
Local
May 23 Thu, 12:30 PM — 2:00 PM CDT
Location
Regency F

IPFS in the Fast Lane: Accelerating Record Storage with Optimistic Provide

Dennis Trautwein (University of Göttingen, Germany & Protocol Labs Inc., USA); Yiluo Wei (Hong Kong University of Science & Technology (GZ), China); Ioannis Psaras (Protocol Labs & University College London, United Kingdom (Great Britain)); Moritz Schubotz (FIZ-Karlsruhe, Germany); Ignacio Castro (Queen Mary University of London, United Kingdom (Great Britain)); Bela Gipp (University of Göttingen, Germany); Gareth Tyson (The Hong Kong University of Science and Technology & Queen Mary University of London, Hong Kong)

The centralization of web services has raised concerns about critical single points of failure, such as content hosting, name resolution, and certification. To address these issues, the "Decentralized Web" movement advocates for decentralized alternatives. Distributed Hash Tables (DHTs) have emerged as a key component facilitating this movement, as they offer efficient key/value indexing. The InterPlanetary File System (IPFS) exemplifies this approach by leveraging DHTs for data indexing and distribution. A critical finding of previous studies is that PUT performance for record storage is unacceptably slow, sometimes taking minutes to complete and hindering the adoption of delay-intolerant applications. To address this challenge, this research paper presents three significant contributions. First, we present the design of Optimistic Provide, an approach to accelerate DHT PUT operations in Kademlia-based IPFS networks while maintaining full backward compatibility. Second, we implement and deploy the mechanism and observe its usage in the de facto IPFS implementation, Kubo. Third, we evaluate its effectiveness in the IPFS and Filecoin DHTs. We confirm that we enable sub-second record storage from North America and Europe for 90% of PUT operations while reducing networking overhead by over 40% and maintaining record availability.
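For background, record placement in a Kademlia DHT like the one IPFS uses depends on the XOR distance between a record key and peer IDs: a PUT stores the record on the k closest peers. The sketch below (with hypothetical peer names) illustrates only this metric, not Optimistic Provide itself:

```python
import hashlib

def node_id(name):
    """Hash a (hypothetical) peer name to a 256-bit Kademlia-style ID."""
    return int.from_bytes(hashlib.sha256(name.encode()).digest(), "big")

def closest_peers(key, peers, k=3):
    """Select the k peers whose IDs are XOR-closest to the record key."""
    return sorted(peers, key=lambda p: peers[p] ^ key)[:k]

# Toy network of 20 hypothetical peers and one record key.
peers = {f"peer-{i}": node_id(f"peer-{i}") for i in range(20)}
key = node_id("some-record")
print(closest_peers(key, peers))
```

The slowness the paper attacks comes from the iterative lookups needed to discover those closest peers on the live network; Optimistic Provide starts storing on peers that are likely to end up in that set before the lookup fully terminates.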
Speaker Dennis Trautwein (University of Göttingen)

Dennis Trautwein is a PhD candidate at the University of Göttingen working with Prof. Dr. Bela Gipp. He is also a Research Engineer at IPShipyard, maintaining IPFS, libp2p, and the monitoring infrastructure for both projects. He completed his Bachelor's degree in extraterrestrial physics and his Master's degree in solid-state physics at the CAU in Kiel before diving into topics revolving around decentralization and peer-to-peer networks in general. In his spare time, he enjoys playing the guitar and the nature around Lake Constance.


Fast Algorithms for Loop-Free Network Updates using Linear Programming and Local Search

Radu Vintan (EPFL, Switzerland); Harald Raecke (TU Munich, Germany); Stefan Schmid (TU Berlin, Germany)

To meet stringent performance requirements, communication networks are becoming increasingly programmable and flexible, supporting fast and frequent adjustments. However, reconfiguring networks in a dependable and transiently consistent manner is known to be algorithmically challenging. This paper revisits the fundamental problem of how to update the routes in a network in a (transiently) loop-free manner, considering both the Strong Loop-Freedom (SLF) and the Relaxed Loop-Freedom (RLF) property.

We present two fast algorithms to solve the SLF and RLF problem variants exactly, to optimality. Our algorithms are based on a parameterized integer linear program which would be intractable to solve directly by a classic solver. Our main technical contribution is a lazy cycle breaking strategy which, by adding constraints lazily, improves performance dramatically, and outperforms the state-of-the-art exact algorithms by an order of magnitude on realistic medium-sized networks. We further explore approximate algorithms and show that while a relaxation approach is relatively slow, with a local search approach short update schedules can be found, outperforming the state-of-the-art heuristics.

On the theoretical front, we also provide an approximation lower bound for the update time of the state-of-the-art algorithm in the literature. We made all our code and implementations publicly available.
Speaker
Speaker biography is not available.

The Reinforcement Cuckoo Filter

Meng Li and Wenqi Luo (Nanjing University, China); Haipeng Dai (Nanjing University, China & State Key Laboratory for Novel Software Technology, China); Huayi Chai (University of Nanjing, China); Rong Gu (Nanjing University, China); Xiaoyu Wang (Soochow University, China); Guihai Chen (Shanghai Jiao Tong University, China)

In this paper, we consider the approximate membership testing problem on skewed data traces, in which some hot or popular items repeat frequently. Previous solutions suffer from either high false positive rates or low lookup throughput. To address this problem, we propose the Reinforcement Cuckoo Filter (RCF), a variant of the cuckoo filter enhanced with a hotness-aware suffix cache. We note that a false positive item must have a matched fingerprint in the cuckoo filter, and propose to reduce false positives by memorizing them, but with their suffixes only. For each false positive item, we apply a linear-congruential-based hash function and then divide the hash value into three parts: the bucket index to be accessed in the cuckoo filter, the fingerprint to be stored in the cuckoo filter, and the suffix to be cached. Combining the three parts, a hot false positive item can be uniquely identified and avoided. Our evaluation results indicate that RCF significantly outperforms non-adaptive filters on skewed data traces. Given the same memory size, it achieves a much lower false positive ratio without sacrificing lookup throughput. Compared with adaptive filters, RCF provides a competitive false positive ratio while offering a considerably higher (30-100×) lookup throughput.
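The hash-splitting step described above can be sketched as follows; the LCG constants and bit widths are assumptions for illustration, not the paper's parameters:

```python
# Hypothetical 64-bit linear congruential hash (constants are assumptions).
LCG_A = 6364136223846793005
LCG_C = 1442695040888963407
MASK64 = (1 << 64) - 1

def lcg_hash(x):
    return (LCG_A * x + LCG_C) & MASK64

def split(h, bucket_bits=16, fp_bits=12, suffix_bits=12):
    """Carve the bucket index, fingerprint, and cached suffix out of h."""
    bucket = h & ((1 << bucket_bits) - 1)
    fingerprint = (h >> bucket_bits) & ((1 << fp_bits) - 1)
    suffix = (h >> (bucket_bits + fp_bits)) & ((1 << suffix_bits) - 1)
    return bucket, fingerprint, suffix

bucket, fingerprint, suffix = split(lcg_hash(42))
print(bucket, fingerprint, suffix)
```

A lookup that matches both the (bucket, fingerprint) pair in the filter and the cached suffix can then be recognized as a previously seen hot false positive and rejected, without storing the full item.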
Speaker Wenqi Luo (Nanjing University)



Multi-Order Clustering on Dynamic Networks: On Error Accumulation and Its Elimination

Yang Gao and Hongli Zhang (Harbin Institute of Technology, China)

Local clustering aims to find a high-quality cluster near a given vertex. Recently, higher-order units have been introduced to local clustering, and the underlying information has been verified to be essential. However, original edges are underestimated in these techniques, leading to the degeneration of network information. Moreover, most higher-order models are designed for static networks, whereas real-world networks are generally large and evolve rapidly. Repeatedly running a static algorithm at each snapshot is usually computationally impractical, and recent approaches instead track a cluster by updating it sequentially. However, errors accumulate over lengthy evolutions, and the complete cluster needs to be recalculated periodically to maintain accuracy, which naturally affects efficiency. To bridge the two gaps, we design a multi-order hypergraph and present a hybrid model for dynamic clustering. In particular, we propose an incremental method to track a personalized PageRank vector in the evolving hypergraph, which converges to the exact solution at each snapshot while significantly reducing the complexity. We further develop a dynamic sweep to identify a cut in each vector, whereby a cluster can be incrementally updated with no accumulated errors. We provide a rigorous theoretical basis and conduct comprehensive experiments, which demonstrate the effectiveness of our approach.
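For context, the classic approximate personalized PageRank "push" routine (Andersen-Chung-Lang) computes the kind of vector such local-clustering methods track. The graph below is a toy example, and this static sketch omits the paper's incremental updates and hypergraph model:

```python
from collections import defaultdict

def approx_ppr(graph, seed, alpha=0.15, eps=1e-4):
    """Approximate personalized PageRank via residual pushes."""
    p = defaultdict(float)  # approximate PPR scores
    r = defaultdict(float)  # residual mass still to be distributed
    r[seed] = 1.0
    queue = [seed]
    while queue:
        u = queue.pop()
        deg = len(graph[u])
        if r[u] < eps * deg:
            continue  # residual too small to push
        p[u] += alpha * r[u]
        push = (1 - alpha) * r[u] / (2 * deg)
        r[u] = (1 - alpha) * r[u] / 2
        for v in graph[u]:
            r[v] += push
            if r[v] >= eps * len(graph[v]):
                queue.append(v)
        if r[u] >= eps * deg:
            queue.append(u)
    return dict(p)

# Two triangles joined by one edge; most mass stays in the seed's triangle.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
scores = approx_ppr(graph, seed=0)
print(max(scores, key=scores.get))  # → 0 (the seed scores highest)
```

A sweep cut would then sort vertices by degree-normalized score and pick the prefix with the best conductance; the paper's contribution is maintaining both the vector and the cut as the (hyper)graph evolves, without accumulating errors.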
Speaker Yang Gao (Harbin Institute of Technology)

Yang Gao received the B.S. degree in mathematics from Jilin University, Changchun, China, in 2009, and the Ph.D. degree in computer science from Harbin Institute of Technology, Harbin, China, in 2019. Currently, he is an assistant professor with School of Cyberspace Science, Harbin Institute of Technology. His research interests include network and information security, and graph theory.


Session Chair

Mario Pickavet (Ghent University - imec, Belgium)

Session G-9

G-9: Modeling and Optimization

Conference
10:30 AM — 12:00 PM PDT
Local
May 23 Thu, 12:30 PM — 2:00 PM CDT
Location
Prince of Wales/Oxford

AnalyticalDF: Analytical Model for Blocking Probabilities Considering Spectrum Defragmentation in Spectrally-Spatially Elastic Optical Networks

Imran Ahmed and Roshan Kumar Rai (South Asian University, India); Eiji Oki (Kyoto University, Japan); Bijoy Chand Chatterjee (South Asian University, India)

Recently, multi-core and multi-mode fibres (MCMMFs) have been considered to overcome physical limitations and increase transport capacity. They are combined with elastic optical networks (EONs) to form spectrally-spatially elastic optical networks (SS-EONs), an emerging technology. Fragmentation and crosstalk (XT) are well-known drawbacks of SS-EONs that increase blocking probability; evaluating blocking probability analytically is difficult due to the additional constraints. When calculating blocking probabilities in MCMMF-based SS-EONs, all current studies either employ simulation-based techniques or do not consider defragmentation in their analytical models. This paper proposes AnalyticalDF, an exact analytical continuous-time Markov chain model for blocking probabilities in SS-EONs, which considers defragmentation and the XT-avoided approach. AnalyticalDF generates all possible states and transitions while avoiding inter-core and inter-mode XT for single-class and multi-class requests. Single-class requests utilize the same number of slots, whereas multi-class requests adopt varying numbers of slots to accommodate client needs. We introduce an iterative approximation model for a single-hop link for cases where AnalyticalDF is not tractable due to scalability. We further extend the single-hop model to multi-hop networks. We evaluate AnalyticalDF and the iterative approximate model against simulation studies for a single-hop link. The numerical results indicate that AnalyticalDF outperforms a non-defragmentation-aware benchmark model.
Speaker
Speaker biography is not available.

Modeling Average False Positive Rates of Recycling Bloom Filters

Kahlil A Dozier, Loqman Salamatian and Dan Rubenstein (Columbia University, USA)

Bloom Filters are a space-efficient data structure for testing membership in a set that errs only in the false positive direction. However, the standard analysis of this false positive rate provides a worst-case bound that is overly conservative for the majority of network applications that utilize Bloom Filters, and loses accuracy by not taking into account the Bloom Filter state (the number of bits set) after each arrival. In this paper, we more accurately characterize the false positive dynamics of Bloom Filters as they are commonly used in networking applications. Network applications often utilize a Bloom Filter that "recycles": it repeatedly fills, empties, and fills again. Users of a recycling Bloom Filter are often best served by the average false positive rate rather than the worst case. We model a recycling Bloom Filter as a Markov process, efficiently compute its average false positive rate, and derive exact expressions for the long-term false positive rate. We apply our model to the standard Bloom Filter and a "two-phase" variant, verify model accuracy with simulations, and find that the previous worst-case formulation reduces the efficiency of Bloom Filters when applied in network applications.
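The gap between the worst-case bound and the recycling average can be seen in a toy simulation. This is an assumption-laden sketch (parameters and hashing are illustrative, not the paper's model): a filter fills to capacity n, is emptied, and refills, while fresh non-members are probed at every fill level; the time-averaged false positive rate is then compared with the classical full-filter bound (1 − e^(−kn/m))^k.

```python
import math, random

def make_hashes(k, m, seed=0):
    """k salted hash functions mapping items to [0, m)."""
    rng = random.Random(seed)
    salts = [rng.getrandbits(32) for _ in range(k)]
    return lambda x: [hash((s, x)) % m for s in salts]

def recycling_fpr(m=1024, k=4, n=150, cycles=20, probes=2000, seed=1):
    """Time-averaged false positive rate of a recycling Bloom filter."""
    rng = random.Random(seed)
    hashes = make_hashes(k, m)
    fp = total = 0
    for c in range(cycles):
        bits = [0] * m              # recycle: empty the filter
        for i in range(n):
            for h in hashes(("ins", c, i)):
                bits[h] = 1
            # probe fresh non-members at the *current* fill level
            for j in range(probes // n):
                total += 1
                fp += all(bits[h] for h in hashes(("probe", c, i, j, rng.random())))
    return fp / total

avg = recycling_fpr()
worst = (1 - math.exp(-4 * 150 / 1024)) ** 4   # classical full-filter bound
# avg falls well below `worst`: most probes see a partially filled filter.
```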
Speaker
Speaker biography is not available.

On Ultra-Sharp Queueing Bounds

Florin Ciucu and Sima Mehri (University of Warwick, United Kingdom (Great Britain)); Amr Rizk (University of Duisburg-Essen, Germany)

We present a robust method to analyze a broad range of classical queueing models, e.g., the $GI/G/1$ queue with renewal arrivals, the $AR/G/1$ queue with alternating renewals (AR), as a special class of Semi-Markovian processes, and Markovian fluid queues. At the core of the method lies a standard change-of-measure argument to reverse the sign of the \textit{negative} drift in the underlying random walks. Combined with a suitable representation of the overshoot, we obtain exact results in terms of series. Closed-form and computationally fast bounds follow by taking the series' first terms, which are the dominant ones because of the \textit{positive} drift under the new probability measure. The obtained bounds generalize the state-of-the-art class of martingale bounds and can be sharper by orders of magnitude.
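For orientation, the classical exponential bound that such martingale analyses sharpen can be stated as follows; this is standard background (a Kingman/Lundberg-type inequality), not the paper's new series result:

```latex
% GI/G/1 waiting time W, i.i.d. service times S and interarrival times A,
% negative drift E[S - A] < 0:
\[
  \Pr(W \ge b) \;\le\; e^{-\theta^{*} b},
  \qquad \text{where } \theta^{*} > 0 \text{ solves }
  \mathbb{E}\!\left[ e^{\theta (S - A)} \right] = 1 .
\]
% The change of measure exponentially tilts S - A by \theta^{*}, turning the
% negative drift positive; representing the overshoot explicitly then yields
% an exact series whose leading terms give sharper closed-form bounds.
```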
Speaker
Speaker biography is not available.

Optimization of Offloading Policies for Accuracy-Delay Tradeoffs in Hierarchical Inference

Hasan Burhan Beytur, Ahmet Gunhan Aydin, Gustavo de Veciana and Haris Vikalo (The University of Texas at Austin, USA)

We consider a hierarchical inference system with multiple clients connected to a server via a shared communication resource. When necessary, clients with low-accuracy machine learning models can offload classification tasks to a server for processing on a high-accuracy model. We propose a distributed online offloading algorithm that maximizes accuracy subject to a shared resource utilization constraint, thus indirectly realizing the accuracy-delay tradeoffs possible given an underlying network scheduler. The proposed algorithm, named Lyapunov-EXP4, introduces a loss structure based on Lyapunov-drift minimization techniques into the bandits-with-expert-advice framework. We prove that the algorithm converges to a near-optimal threshold policy on the confidence of the clients' local inference without prior knowledge of the system's statistics, and efficiently solves a constrained bandit problem with sublinear regret. We further consider settings where clients may employ multiple thresholds, allowing more aggressive optimization of overall accuracy at a possible loss in fairness. Extensive simulation results on real and synthetic data demonstrate the convergence of Lyapunov-EXP4, and show the accuracy-delay-fairness trade-offs achievable in such systems.
Speaker Hasan Burhan Beytur (The University of Texas at Austin)



Session Chair

Ningning Ding (Northwestern University, USA)

Session Lunch-3

Conference Lunch (for Registered Attendees)

Conference
12:00 PM — 1:30 PM PDT
Local
May 23 Thu, 2:00 PM — 3:30 PM CDT
Location
Georgia Ballroom and Plaza Ballroom (2nd Floor)

Session A-10

A-10: RF and Physical Layer

Conference
1:30 PM — 3:00 PM PDT
Local
May 23 Thu, 3:30 PM — 5:00 PM CDT
Location
Regency A

Cross-Shaped Separated Spatial-Temporal UNet Transformer for Accurate Channel Prediction

Hua Kang (Noah's Ark Lab, Huawei, Hong Kong); Qingyong Hu (Hong Kong University of Science and Technology, Hong Kong); Huangxun Chen (Hong Kong University of Science and Technology (Guangzhou), China); Qianyi Huang (Sun Yat-Sen University, China & Peng Cheng Laboratory, China); Qian Zhang (Hong Kong University of Science and Technology, Hong Kong); Min Cheng (Noah's Ark Lab, Huawei, Hong Kong)

Accurate channel estimation is crucial for the performance gains of massive multiple-input multiple-output (mMIMO) technologies. However, frequently estimating the large channel matrix to combat the time-varying wireless channel is bandwidth-unfriendly. Deep learning-based channel prediction has emerged to exploit the temporal relationships between historical and future channels and address the bandwidth-accuracy trade-off. Existing methods with convolutional or recurrent neural networks suffer from intrinsic limitations, including restricted receptive fields and propagation errors. Therefore, we propose a Transformer-based model, CS3T-UNet, tailored for mMIMO channel prediction. Specifically, we combine cross-shaped spatial attention with a group-wise temporal attention scheme to capture the dependencies across the spatial and temporal domains, respectively, and introduce shortcut paths to aggregate multi-resolution representations effectively. Thus, CS3T-UNet can globally capture the complex spatial-temporal relationship and predict multiple steps in parallel, meeting the requirement of channel coherence time. Extensive experiments demonstrate that the prediction performance of CS3T-UNet surpasses the best baseline by up to 6.86 dB with a smaller computation cost under two channel conditions.
Speaker Hua KANG (Noah's Ark Lab, Huawei)

I graduated from HKUST in August, 2023 and am currently a researcher at Noah's Ark Lab, Huawei in Hong Kong. 

I'm actively working on topics at the intersection of IoT sensing, wireless communication and deep learning, with a focus on building ubiquitous, privacy-friendly and efficient machine learning systems for IoT applications. 


Diff-ADF: Differential Adjacent-dual-frame Radio Frequency Fingerprinting for LoRa Devices

Wei He, Wenjia Wu, Xiaolin Gu and Zichao Chen (Southeast University, China)

Nowadays, LoRa radio frequency fingerprinting has gained widespread attention due to its lightweight nature and difficulty of being forged. Existing fingerprint extraction methods are mainly divided into two categories: deep learning-based methods and feature engineering-based methods. Deep learning-based methods have poor robustness and require significant resource costs for model training. Although feature engineering-based methods can overcome these drawbacks, the features they commonly use, such as carrier frequency offset (CFO) and phase noise, lack sufficient discriminative power. Therefore, it is very challenging to design a radio frequency fingerprinting solution with high accuracy and stable identification performance. Fortunately, we find that the differential phase noise between adjacent dual frames possesses excellent discriminative power and stability. We therefore design a radio frequency fingerprinting solution called Diff-ADF, which utilizes a classifier with differential phase noise as the primary feature, complemented by CFO as an auxiliary feature. Finally, we implement Diff-ADF and conduct experiments in real environments. Experimental results demonstrate that our proposed solution achieves an accuracy of over 90% on training and test data collected from different days, which is significantly superior to deep learning-based methods. Even in non-line-of-sight environments, our identification accuracy can still reach close to 85%.
Speaker Wei He (Southeast University)

Graduate student, School Of Cyber Science and Engineering, Southeast University


Cross-domain, Scalable, and Interpretable RF Device Fingerprinting

Tianya Zhao and Xuyu Wang (Florida International University, USA); Shiwen Mao (Auburn University, USA)

In this paper, we propose a cross-domain, scalable, and interpretable radio frequency (RF) fingerprinting system using a modified prototypical network (PTN) and explanation-guided data augmentation across various domains and datasets with only a few samples. Specifically, a convolutional neural network is employed as the feature extractor of the PTN to extract RF fingerprint features. Predictions are made by comparing the similarity between prototypes and feature embedding vectors. To further improve system performance, we design a customized loss function and deploy an eXplainable Artificial Intelligence (XAI) method to guide data augmentation during fine-tuning. To evaluate the effectiveness of our system in addressing domain shift and scalability problems, we conducted extensive experiments in both cross-domain and novel-device scenarios. Our study shows that our approach achieves exceptional performance in the cross-domain case, exhibiting an accuracy improvement of approximately 80% compared to convolutional neural networks in the best case. Furthermore, our approach demonstrates promising results in the novel-device case across different datasets. Our customized loss function and XAI-guided data augmentation can further improve authentication accuracy to a certain degree.
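The nearest-prototype decision rule at the core of a PTN can be illustrated without any neural network. This is a generic sketch of the idea, not the paper's model: prototypes are mean embeddings of a few support samples per device, and a query is labeled by its nearest prototype (the device labels and 2-D embeddings are invented for illustration).

```python
# Generic prototypical-network classification step (illustrative sketch;
# embeddings would come from a trained feature extractor in practice).
def mean_vec(vs):
    """Component-wise mean of a list of equal-length vectors."""
    return [sum(x) / len(vs) for x in zip(*vs)]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(query, support):
    """support: {label: [embedding, ...]} -> label of the nearest prototype."""
    protos = {lbl: mean_vec(vs) for lbl, vs in support.items()}
    return min(protos, key=lambda lbl: sq_dist(query, protos[lbl]))

# Hypothetical 2-D embeddings of two devices' support samples.
support = {"dev_A": [[0.9, 0.1], [1.1, -0.1]],
           "dev_B": [[-1.0, 0.2], [-0.8, 0.0]]}
label = classify([0.8, 0.0], support)
```

Because adding a new device only requires computing one more prototype from a few samples, this rule is what makes the approach naturally scalable to novel devices.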
Speaker Tianya Zhao (Florida International University)

Tianya Zhao is a second-year Ph.D. student studying computer science at FIU, supervised by Dr. Xuyu Wang. Prior to this, he received his Master's degree from Carnegie Mellon University and Bachelor's degree from Hunan University. In his current Ph.D. program, he is focusing on AIoT, AI Security, Wireless Sensing, and Smart Health.


PRISM: Pre-training RF Signals in Sparsity-aware Masked Autoencoders

Liang Fang, Ruiyuan Song, Zhi Lu, Dongheng Zhang, Yang Hu, Qibin Sun and Yan Chen (University of Science and Technology of China, China)

This paper introduces a novel paradigm for learning-based RF sensing, termed Pre-training RF signals In Sparsity-aware Masked autoencoders (PRISM), which shifts the RF sensing paradigm from supervised training on limited annotated datasets to unsupervised pre-training on large-scale unannotated datasets, followed by fine-tuning with a small annotated dataset. PRISM leverages a carefully designed sparsity-aware masking strategy to predict missing contents by masking a portion of RF signals, resulting in an efficient pre-training framework that significantly reduces computation and memory resources. This addresses the major challenges posed by large-scale and high-dimensional RF datasets, where memory consumption and computation speed are critical factors. We demonstrate PRISM's excellent generalization performance across diverse RF sensing tasks by evaluating it on three typical scenarios: human silhouette segmentation, 3D pose estimation, and gesture recognition, involving two general RF devices, radar and WiFi. The experimental results provide strong evidence for the effectiveness of PRISM as a robust learning-based solution for large-scale RF sensing applications.
Speaker
Speaker biography is not available.

Session Chair

Shiwen Mao (Auburn University, USA)

Session B-10

B-10: Network Verification and Tomography

Conference
1:30 PM — 3:00 PM PDT
Local
May 23 Thu, 3:30 PM — 5:00 PM CDT
Location
Regency B

Network Can Help Check Itself: Accelerating SMT-based Network Configuration Verification Using Network Domain Knowledge

Xing Fang (Xiamen University, China); Feiyan Ding (Xiamen, China); Bang Huang, Ziyi Wang, Gao Han, Rulan Yang, Lizhao You and Qiao Xiang (Xiamen University, China); Linghe Kong and Yutong Liu (Shanghai Jiao Tong University, China); Jiwu Shu (Xiamen University, China)

Satisfiability Modulo Theories (SMT) based network configuration verification tools are powerful in preventing network configuration errors. However, their fundamental limitation is efficiency, because they rely on generic SMT solvers to solve SMT problems, which are in general NP-complete. In this paper, we show that by leveraging network domain knowledge, we can substantially accelerate SMT-based network configuration verification. Our key insights are: given a network configuration verification formula, network domain knowledge can (1) guide the search for solutions to the formula by avoiding unnecessary search spaces; and (2) help simplify the formula, reducing the problem scale. We leverage these insights to design a new SMT-based network configuration verification tool called NetSMT. Extensive evaluation using real-world topologies and synthetic network configurations shows that NetSMT achieves orders-of-magnitude improvements compared to state-of-the-art methods.
Speaker
Speaker biography is not available.

P4Inv: Inferring Packet Invariants for Verification of Stateful P4 Programs

Delong Zhang, Chong Ye and Fei He (Tsinghua University, China)

P4 is widely adopted for programming data planes in software-defined networking. Formal verification of P4 programs is essential to ensure network reliability and security. However, existing P4 verifiers overlook the stateful nature of packet processing, rendering them inadequate for verifying complex stateful P4 programs.

In this paper, we introduce a novel concept called packet invariants to address the stateful aspects of P4 programs, and present an automated verification tool specifically designed for stateful P4 programs. Our algorithm efficiently discovers and validates packet invariants in a data-driven manner, offering a novel and effective verification approach for stateful P4 programs. To the best of our knowledge, this approach represents the first attempt to generate and leverage domain-specific invariants for P4 program verification. We implement our approach in a prototype tool called P4Inv. Experimental results demonstrate its effectiveness in verifying stateful P4 programs.
Speaker Delong Zhang (Tsinghua University)

Graduate student of School of Software, Tsinghua University, engaged in the field of formal verification.


Routing-Oblivious Network Tomography with Flow-based Generative Model

Yan Qiao and Xinyu Yuan (Hefei University of Technology, China); Kui Wu (University of Victoria, Canada)

Given the high cost associated with directly measuring the traffic matrix (TM), researchers have dedicated decades to devising methods for estimating the complete TM from low-cost link loads by solving a set of heavily ill-posed linear equations. Today's increasingly intricate networks present an even greater challenge: the routing matrix within these equations can no longer be deemed reliable. To address this challenge, we, for the first time, employ a flow-based generative model for the TM estimation problem by establishing an invertible correlation between the TM and link loads, oblivious to the routing matrix. We demonstrate that the lost information within the ill-posed equations can be independently segregated from the TM. Our model jointly learns the invertible correlations between the TM and link loads as well as the distribution of the lost information. As a result, our model can unbiasedly reverse-transform the link loads to the true TM. We extensively evaluate our model on two real-world datasets. Surprisingly, even without knowledge of the routing matrix, it significantly outperforms six representative baselines in deterministic and noisy routing scenarios in terms of estimation accuracy and distribution similarity. In particular, when the actual routing matrix is absent, our model improves the performance of the best baseline by 41%~58%.
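The ill-posedness the abstract refers to comes from the classical tomography equation y = A·x, where A is the routing matrix and x is the TM flattened into a flow vector: with fewer links than flows, distinct TMs can produce identical link loads. The tiny example below (toy routing matrix and flow volumes, chosen only for illustration) makes this concrete.

```python
# Classical network tomography setting: link loads y = A x.
# With 2 links and 3 flows the system is underdetermined.
def link_loads(A, x):
    """Multiply routing matrix A (rows = links) by flow vector x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# Flow 0 uses link 0, flow 1 uses link 1, flow 2 traverses both.
A = [[1, 0, 1],
     [0, 1, 1]]

x_true = [5.0, 3.0, 2.0]
x_alt  = [6.0, 4.0, 1.0]   # a different traffic matrix...

assert link_loads(A, x_true) == link_loads(A, x_alt)  # ...same observations
```

Classical estimators resolve this ambiguity with priors and a trusted A; the paper's generative model instead learns an invertible mapping without relying on A at all.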
Speaker
Speaker biography is not available.

VeriEdge: Verifying and Enforcing Service Level Agreements for Pervasive Edge Computing

Xiaojian Wang and Ruozhou Yu (North Carolina State University, USA); Dejun Yang (Colorado School of Mines, USA); Huayue Gu and Zhouyu Li (North Carolina State University, USA)

Edge computing has gained popularity for its promise of low latency and high-quality computing services to users. However, it has also introduced the challenge of mutual untrust between user and edge devices with respect to service level agreement (SLA) compliance. This obstacle hampers the wide adoption of edge computing, especially in pervasive edge computing (PEC), where edge devices can freely enter or exit the market, making it significantly more challenging to verify and enforce SLAs. In this paper, we propose a framework for verifying and enforcing SLAs in PEC, allowing a user to assess the SLA compliance of an edge service and ensure the correctness of the service results. Our solution, called VeriEdge, employs a verifiable delayed sampling approach to sample a small number of computation steps, and relies on randomly selected verifiers to verify the correctness of the computation results. To ensure the verification process is non-manipulable, we employ verifiable random functions to post-select the verifier(s). A dispute protocol is designed to resolve disputes over potential misbehavior. Rigorous security analysis demonstrates that VeriEdge achieves a high probability of detecting SLA violations with minimal overhead. Experimental results indicate that VeriEdge is lightweight, practical, and efficient.
Speaker Ruozhou Yu, NC State University, USA

Ruozhou Yu is an Assistant Professor in Computer Science from the NC State University, USA. His research interests include edge computing, network security, blockchain, and quantum networks. He is a TPC member and an organizing committee member of INFOCOM 2024. He received the US NSF CAREER Award in 2021.


Session Chair

Kui Wu (University of Victoria, Canada)

Session C-10

C-10: Network Security and Privacy

Conference
1:30 PM — 3:00 PM PDT
Local
May 23 Thu, 3:30 PM — 5:00 PM CDT
Location
Regency C

Utility-Preserving Face Anonymization via Differentially Private Feature Operations

Chengqi Li, Sarah Simionescu, Wenbo He and Sanzheng Qiao (McMaster University, Canada); Nadjia Kara (École de Technologie Supérieure, Canada); Chamseddine Talhi (Ecole de Technologie Superieure, Canada)

Facial images play a crucial role in many web and security applications, but their use comes with notable privacy risks. Despite the availability of various face anonymization algorithms, they often fail to withstand advanced attacks while struggling to maintain utility for subsequent applications. We present two novel face anonymization algorithms that utilize feature operations to overcome these limitations. The first algorithm employs high-level feature matching, while the second incorporates additional low-level feature perturbation and regularization. These algorithms significantly enhance the utility of anonymized images while ensuring differential privacy. Additionally, we introduce a task-based benchmark to enable fair and comprehensive evaluations of privacy and utility across different algorithms. Through experiments, we demonstrate that our algorithms outperform others in preserving the utility of anonymized facial images in classification tasks while effectively protecting against a wide range of attacks.
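As background for differentially private feature perturbation, the canonical building block is the Laplace mechanism: add noise drawn from Lap(sensitivity/ε) to each released value. The sketch below is classical textbook material, not the authors' calibration; the feature vector, sensitivity, and ε are invented for illustration.

```python
import math, random

# Classical Laplace mechanism (background sketch; parameters are
# illustrative assumptions, not the paper's privacy calibration).
def laplace_mechanism(value, sensitivity, epsilon, rng=random.Random(0)):
    """Release value + Lap(sensitivity / epsilon) noise (inverse-CDF sampling)."""
    scale = sensitivity / epsilon
    u = rng.random() - 0.5                     # uniform on [-0.5, 0.5)
    noise = -scale * (1 if u >= 0 else -1) * math.log(1 - 2 * abs(u))
    return value + noise

# Perturb each coordinate of a (hypothetical) low-level face feature vector.
feature = [0.12, -0.53, 0.88]
private = [laplace_mechanism(x, sensitivity=1.0, epsilon=2.0) for x in feature]
```

Smaller ε gives stronger privacy but larger noise; the paper's contribution lies in choosing *which* features to perturb so that downstream utility survives this trade-off.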
Speaker
Speaker biography is not available.

Toward Accurate Butterfly Counting with Edge Privacy Preserving in Bipartite Networks

Mengyuan Wang, Hongbo Jiang, Peng Peng, Youhuan Li and Wenbin Huang (Hunan University, China)

Butterfly counting is widely used to analyze bipartite networks, but counting butterflies in original bipartite networks can reveal sensitive data and pose a risk to individual privacy, specifically edge privacy. Current privacy notions do not fully address the needs of both user-user and user-item bipartite networks. In this paper, we propose a novel privacy notion, edge decentralized differential privacy (edge DDP), which preserves edge privacy in any bipartite network. We also design the randomized edge protocol (REP) to perturb real edges in bipartite networks. However, the significant amount of noise in perturbed bipartite networks often leads to an overcount of butterflies. To achieve accurate butterfly counting, we design the randomized group protocol (RGP) to reduce noise. By combining REP and RGP, we propose a two-phase framework called butterfly counting in limitedly synthesized bipartite networks (BC-LimBN) to synthesize networks for accurate butterfly counting. BC-LimBN has been rigorously proven to satisfy edge DDP. Our experiments on various datasets confirm the high accuracy of BC-LimBN in butterfly counting and its superiority over competitors, with a mean relative error of less than 10%. Furthermore, our experiments show that BC-LimBN has a low time cost, requiring only a few seconds on our datasets.
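To make the counted motif concrete: a butterfly is a complete 2×2 biclique (two left vertices both connected to the same two right vertices). The exact counter below is a standard wedge-based formulation shown for illustration; BC-LimBN estimates this quantity under edge DDP rather than computing it on the raw graph. The toy edge lists are assumptions.

```python
from itertools import combinations

# Exact butterfly (2x2 biclique) counter for a small bipartite graph
# (illustrative background, not the paper's private estimator).
def count_butterflies(edges):
    """edges: iterable of (u, v) with u on the left side, v on the right."""
    nbrs = {}
    for u, v in edges:
        nbrs.setdefault(u, set()).add(v)
    total = 0
    for u, w in combinations(nbrs, 2):
        c = len(nbrs[u] & nbrs[w])      # right vertices shared by u and w
        total += c * (c - 1) // 2       # each pair of them closes one butterfly
    return total

# K_{2,2} contains exactly one butterfly.
n = count_butterflies([(0, 'a'), (0, 'b'), (1, 'a'), (1, 'b')])
```

Because each perturbed edge participates in many wedges, even modest edge noise inflates this count sharply, which is why the framework needs its noise-reduction phase.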
Speaker
Speaker biography is not available.

Efficient and Effective In-Vehicle Intrusion Detection System using Binarized Convolutional Neural Network

Linxi Zhang (Central Michigan University, USA); Xuke Yan (Oakland University, USA); Di Ma (University of Michigan-Dearborn, USA)

Modern vehicles are equipped with multiple Electronic Control Units (ECUs) communicating over in-vehicle networks such as Controller Area Network (CAN). Inherent security limitations in CAN necessitate the use of Intrusion Detection Systems (IDSs) for protection against potential threats. While some IDSs leverage advanced deep learning to improve accuracy, issues such as long processing time and large memory size remain. Existing Binarized Neural Network (BNN)-based IDSs, proposed as a solution for efficiency, often compromise on accuracy. To this end, we introduce a novel Binarized Convolutional Neural Network (BCNN)-based IDS, designed to exploit the temporal and spatial characteristics of CAN messages to achieve both efficiency and detection accuracy. In particular, our approach includes a novel input generator capturing temporal and spatial correlations of messages, aiding model learning and ensuring high-accuracy performance. Experimental results suggest our IDS effectively reduces memory utilization and detection latency while maintaining high detection rates. Our IDS runs 4 times faster and utilizes only 3.3% of the memory space required by a full-precision CNN-based IDS. Meanwhile, our proposed system demonstrates a detection accuracy between 94.19% and 96.82% relative to the CNN-based IDS across different attack scenarios. This performance marks a noteworthy improvement over existing state-of-the-art BNN-based IDS designs.
Speaker
Speaker biography is not available.

5G-WAVE: A Core Network Framework with Decentralized Authorization for Network Slices

Pragya Sharma and Tolga O Atalay (Virginia Tech, USA); Hans-Andrew Gibbs and Dragoslav Stojadinovic (Kryptowire LLC, USA); Angelos Stavrou (Virginia Tech & Kryptowire, USA); Haining Wang (Virginia Tech, USA)

5G mobile networks leverage Network Function Virtualization (NFV) to offer services in the form of network slices. Each network slice is a logically isolated fragment constructed by service-chaining a set of Virtual Network Functions (VNFs). The Network Repository Function (NRF) acts as a central Open Authorization (OAuth) 2.0 server to secure inter-VNF communications, resulting in a single point of failure. Thus, we propose 5G-WAVE, a decentralized authorization framework for the 5G core, built by leveraging the WAVE framework and integrating it into the OpenAirInterface (OAI) 5G core. Our design relies on Side-Car Proxies (SCPs) deployed alongside individual VNFs, allowing point-to-point authorization. Each SCP acts as a WAVE engine to create entities and attestations and to verify incoming service requests. We measure the authorization latency overhead for VNF registration, 5G Authentication and Key Agreement (AKA), and data session setup, and observe that WAVE verification introduces a 155 ms overhead to HTTP transactions for decentralizing authorization. Additionally, we evaluate the scalability of 5G-WAVE by instantiating more network slices, observing a 1.4x increase in latency with 10x growth in network size. We also discuss how 5G-WAVE can significantly reduce the 5G attack surface without using OAuth 2.0 while addressing several key issues of 5G standardization.
Speaker
Speaker biography is not available.

Session Chair

Rui Zhang (University of Delaware, USA)

Session D-10

D-10: High Speed Networking

Conference
1:30 PM — 3:00 PM PDT
Local
May 23 Thu, 3:30 PM — 5:00 PM CDT
Location
Regency D

Transparent Broadband VPN Gateway: Achieving 0.39 Tbps per Tunnel with Bump-in-the-Wire

Kenji Tanaka (NTT, Japan); Takashi Uchida and Yuki Matsuda (Fixstars, Japan); Yuki Arikawa (NTT, Japan); Shinya Kaji (Fixstars, Japan); Takeshi Sakamoto (NTT, Japan)

The demand for virtual private networks (VPNs) that provide confidentiality, integrity, and authenticity of communications is growing every year. IPsec is one of the oldest and most widely used VPN protocols, implemented between the internet protocol (IP) layer and the data link layer of the Linux kernel. This implementation method, known as bump-in-the-stack, has the advantage of transparently applying IPsec to traffic without changing the application. However, its throughput efficiency (Gbps/core) is worse than that of regular Linux communication. Therefore, we chose the bump-in-the-wire (BITW) architecture, which handles IPsec in hardware separate from the host. Our proposed BITW architecture consists of inline cryptographic accelerators implemented in field-programmable gate arrays and a programmable switch that connects multiple such accelerators. The VPN gateway implemented with our architecture is transparent and improves throughput efficiency by 3.51 times and power efficiency by 3.40 times over a VPN gateway implemented in the Linux kernel. It also demonstrates excellent scalability, scaling to a maximum of 386.24 Gbps per tunnel and exceeding state-of-the-art technology in maximum throughput and efficiency per tunnel. In multi-tunnel use cases, the proposed architecture improves energy efficiency by 2.49 times.
Speaker
Speaker biography is not available.

Non-invasive performance prediction of high-speed softwarized network services with limited knowledge

Qiong Liu (Telecom Paris, Institute Polytechnique de Paris, France); Tianzhu Zhang (Nokia Bell Labs, France); Leonardo Linguaglossa (Telecom Paris, France)

Modern telco networks have experienced a significant paradigm shift in the past decade, thanks to the proliferation of network softwarization. Despite the benefits of softwarized networks, the constituent software data planes cannot always guarantee predictable performance due to resource contention in the underlying shared infrastructure. Performance predictions are thus paramount for network operators to fulfill Service-Level Agreements (SLAs), especially in high-speed regimes (e.g., Gigabit or Terabit Ethernet). Existing solutions heavily rely on in-band feature collection, which imposes non-trivial engineering and data-path overhead.

This paper proposes a non-invasive approach to data-plane performance prediction: our framework complements state-of-the-art solutions by measuring and analyzing low-level features ubiquitously available in the network infrastructure. Accessing these features does not hamper the packet data path. Our approach does not rely on prior knowledge of the input traffic, VNFs' internals, and system details.
We show that (i) low-level hardware features exposed by the NFV infrastructure can be collected and interpreted to diagnose performance issues, (ii) predictive models can be derived with classical ML algorithms, and (iii) these models can be used to accurately predict performance impairments in real NFV systems. Our code and datasets are publicly available.
Speaker
Speaker biography is not available.

BurstDetector: Real-Time and Accurate Across-Period Burst Detection in High-Speed Networks

Zhongyi Cheng, Guoju Gao, He Huang, Yu-e Sun and Yang Du (Soochow University, China); Haibo Wang (University of Kentucky, USA)

Traffic measurement provides essential information for various network services. A burst is a common phenomenon in high-speed network streams, manifesting as a surge in the number of a flow's incoming packets. We propose a new definition named across-period burst, considering the change not between two adjacent time windows but between two groups of temporally contiguous windows. The across-period burst definition better captures the continuous changes of flows in high-speed networks. To achieve real-time burst detection with high accuracy and low memory consumption, we propose a novel sketch named BurstDetector, which consists of two stages. Stage 1 excludes flows that will not become burst flows, while Stage 2 accurately records the information of potential burst flows and carries out across-period burst detection at the end of every time window. We further propose an optimization called Hierarchical Cell, which improves the memory utilization of BurstDetector. In addition, we analyze the estimation accuracy and time complexity of BurstDetector. Extensive experiments based on real-world datasets show that BurstDetector achieves at least 2.8 times the detection accuracy and processing throughput of existing algorithms.
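The group-versus-group comparison behind the across-period definition can be sketched in a few lines. This is an illustrative reconstruction of the definition's spirit, not the paper's sketch algorithm; the group width w and the ratio threshold are assumptions.

```python
# Sketch of an across-period burst test: compare the totals of two adjacent
# *groups* of w windows instead of two single windows. Parameters are
# illustrative assumptions, not values from the paper.
def is_across_period_burst(window_counts, w=3, ratio=2.0):
    """window_counts: per-window packet counts of one flow, newest last."""
    if len(window_counts) < 2 * w:
        return False
    prev = sum(window_counts[-2 * w:-w])   # older group of w windows
    curr = sum(window_counts[-w:])         # newest group of w windows
    return curr >= ratio * max(prev, 1)

# A flow ramping up steadily triggers the across-period test even though
# no single adjacent-window jump doubles the count.
assert is_across_period_burst([4, 5, 6, 10, 12, 14])
assert not is_across_period_burst([5, 5, 5, 6, 5, 6])
```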

NetFEC: In-network FEC Encoding Acceleration for Latency-sensitive Multimedia Applications

Yi Qiao, Han Zhang and Jilong Wang (Tsinghua University, China)

In the face of packet loss, latency-sensitive multimedia applications cannot afford retransmission, because loss detection and retransmission introduce extra latency or otherwise compromise media quality. Alternatively, forward error correction (FEC) ensures reliability by adding redundancy, achieving lower latency at the cost of bandwidth and computational overhead. We propose relocating FEC encoding to hardware that suits its computational pattern better than CPUs. In this paper, we present NetFEC, an in-network acceleration system that offloads the entire FEC encoding process onto emerging programmable switching ASICs, eliminating all CPU involvement. We design a ghost packet mechanism so that NetFEC remains compatible with important media transport functionalities, including congestion control, pacing, and statistics. We integrate NetFEC with WebRTC and conduct extensive experiments on real hardware. Our evaluations demonstrate that NetFEC relieves the server CPU burden while adding negligible overhead.
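For intuition on why FEC encoding suits hardware offload, here is a minimal single-parity XOR code: recovering one lost packet is pure byte-wise XOR, the kind of stateless arithmetic a switch pipeline handles well. NetFEC's actual codes and its ghost packet mechanism are not reproduced here; this only illustrates the computational pattern.

```python
def xor_encode(packets):
    """Compute one XOR parity packet over a group of packets. Any single
    lost packet in the group can then be recovered by XOR-ing the parity
    with the surviving packets. Shorter packets are zero-padded."""
    length = max(len(p) for p in packets)
    padded = [p.ljust(length, b'\x00') for p in packets]
    parity = bytes(length)
    for p in padded:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return parity

def xor_recover(surviving, parity):
    """Recover the single missing packet (zero-padded) from the group."""
    missing = parity
    for p in surviving:
        p = p.ljust(len(parity), b'\x00')
        missing = bytes(a ^ b for a, b in zip(missing, p))
    return missing
```

Production FEC typically uses Reed-Solomon-style codes that tolerate multiple losses per group, but the per-byte arithmetic pattern is similar.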
Speaker Yi Qiao

Yi Qiao received his B.S. degree in Computer Science and Technology from Tsinghua University. He is now a Ph.D. candidate at the Institute of Network Science and Cyberspace, Tsinghua University. His research focuses on software-defined networking, network function virtualization, and cyber security.


Session Chair

Baochun Li (University of Toronto, Canada)

Session E-10

E-10: Machine Learning 4

Conference
1:30 PM — 3:00 PM PDT
Local
May 23 Thu, 3:30 PM — 5:00 PM CDT
Location
Regency E

Augment Online Linear Optimization with Arbitrarily Bad Machine-Learned Predictions

Dacheng Wen (The University of Hong Kong, Hong Kong); Yupeng Li (Hong Kong Baptist University, Hong Kong); Francis C.M. Lau (The University of Hong Kong, Hong Kong)

The online linear optimization paradigm is important to many real-world network applications as well as theoretical algorithmic studies. Recent studies have attempted to augment online linear optimization with machine-learned predictions of the cost function that are meant to improve the performance of the learner. However, they fail to address the realistic case where the predictions can be arbitrarily bad. In this work, we take the first step in studying online linear optimization with a dynamic number of arbitrarily bad machine-learned predictions per round and propose an algorithm termed OLOAP. Our theoretical analysis shows that, when the quality of the predictions is satisfactory, OLOAP achieves a regret bound of O(log T), which circumvents the tight lower bound of Ω(√T) for the vanilla problem of online linear optimization (i.e., the one without any predictions). Meanwhile, the regret of our algorithm is never worse than O(√T), irrespective of the quality of the predictions. In addition, we derive a lower bound on the regret of the studied problem, which demonstrates that OLOAP is near-optimal. We consider two important network applications and conduct extensive evaluations. Our results validate the superiority of our algorithm over state-of-the-art approaches.
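To make the prediction-augmented setting concrete, the sketch below runs optimistic online gradient descent over the probability simplex, folding a single predicted cost vector into each round's play before the true cost is revealed. OLOAP itself handles a dynamic number of possibly arbitrarily bad predictions with different machinery; the step size `eta` and the simplex domain are illustrative assumptions.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def optimistic_ogd(costs, predictions, eta=0.1):
    """Play each round from an 'optimistic' point shifted toward the
    predicted cost; then take the standard gradient step with the true
    cost. Returns the total linear cost incurred by the learner."""
    d = len(costs[0])
    x = np.ones(d) / d  # start at the uniform distribution
    total = 0.0
    for c_true, c_pred in zip(costs, predictions):
        x_play = project_simplex(x - eta * c_pred)  # hedge on the prediction
        total += float(x_play @ c_true)
        x = project_simplex(x - eta * c_true)       # update with true cost
    return total
```

With perfect predictions the learner effectively plays one gradient step ahead, which is where the improvement over prediction-free online linear optimization comes from.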

Dancing with Shackles, Meet the Challenge of Industrial Adaptive Streaming via Offline Reinforcement Learning

Lianchen Jia (Tsinghua University, China); Chao Zhou (Beijing Kuaishou Technology Co., Ltd, China); Tianchi Huang, Chaoyang Li and Lifeng Sun (Tsinghua University, China)

Adaptive video streaming has been studied for over 10 years and has demonstrated remarkable performance. However, adaptive video streaming is not an independent algorithm; it relies on other components of the video system. Consequently, as those components undergo optimization, the gap between the traditional simulator and the real-world system continues to grow, forcing the adaptive video streaming algorithm to adapt to these variations. To address the challenges facing industrial adaptive video streaming, we introduce a novel offline reinforcement learning framework called Backwave. The framework leverages history logs to reduce the sim-to-real gap. We propose new metrics based on counterfactual reasoning to evaluate its performance, and we integrate expert knowledge to generate valuable data that mitigates the issue of data override. Furthermore, we employ curriculum learning to minimize additional errors. We deployed Backwave on a mainstream commercial short video platform, Kuaishou. In a series of A/B tests conducted over nearly one month with over 400M daily watch times, Backwave consistently outperforms prior algorithms. Specifically, Backwave reduces stall time by 0.45% to 8.52% while maintaining comparable video quality, and it improves average play duration by 0.12% to 0.16% and overall play duration by 0.12% to 0.26%.

GraphProxy: Communication-Efficient Federated Graph Learning with Adaptive Proxy

Junyang Wang, Lan Zhang, Junhao Wang, Mu Yuan and Yihang Cheng (University of Science and Technology of China, China); Qian Xu (BestPay Co.,Ltd,China Telecom, China); Bo Yu (Bestpay Co., Ltd, China Telecom, China)

Federated graph learning (FGL) enables multiple participants with distributed but connected graph data to collaboratively train a model in a privacy-preserving way. However, the high communication cost hinders the adoption of FGL in many resource-limited or delay-sensitive applications. In this work, we focus on reducing the communication cost incurred by the transmission of neighborhood information in FGL. We propose to search for local proxies that can substitute for external neighbors, and develop a novel federated graph learning framework named GraphProxy. GraphProxy utilizes representation similarity and class correlation to select local proxies for external neighbors, and dynamically adjusts the proxy strategy according to the changing representations of nodes during the iterative training process. We also perform a theoretical analysis showing that a proxy node has a similar influence on training when it is sufficiently similar to the external one. Extensive evaluations show the effectiveness of our design; e.g., GraphProxy achieves an 8-fold improvement in communication efficiency with only 0.14% performance degradation.
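A toy rendering of the proxy-selection step: for each external neighbor, pick the most similar same-class local node by cosine similarity on current representations, falling back to transmitting the real neighbor when nothing is similar enough. The function names and `sim_threshold` value are hypothetical; GraphProxy's actual selection also exploits class correlation and is re-run as representations evolve during training.

```python
import numpy as np

def select_local_proxies(local_emb, external_emb,
                         local_labels, external_labels, sim_threshold=0.9):
    """Map each external-neighbor index to the index of its best local
    proxy, or to None when no same-class local node is similar enough
    (meaning the external neighbor must still be fetched)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    proxies = {}
    for j, (e_emb, e_lab) in enumerate(zip(external_emb, external_labels)):
        best, best_sim = None, sim_threshold
        for i, (l_emb, l_lab) in enumerate(zip(local_emb, local_labels)):
            if l_lab != e_lab:
                continue  # only same-class nodes may serve as proxies
            s = cos(l_emb, e_emb)
            if s >= best_sim:
                best, best_sim = i, s
        proxies[j] = best
    return proxies
```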

Learning Context-Aware Probabilistic Maximum Coverage Bandits: A Variance-Adaptive Approach

Xutong Liu (The Chinese University of Hong Kong, Hong Kong); Jinhang Zuo (University of Massachusetts Amherst & California Institute of Technology, USA); Junkai Wang (Fudan University, China); Zhiyong Wang (The Chinese University of Hong Kong, Hong Kong); Yuedong Xu (Fudan University, China); John Chi Shing Lui (Chinese University of Hong Kong, Hong Kong)

Probabilistic maximum coverage (PMC) is an important framework that can model many network applications, including mobile crowdsensing, content delivery, and task replication. In PMC, an operator chooses nodes in a graph that can probabilistically cover other nodes, aiming to maximize the total rewards from the covered nodes. To tackle the challenge of unknown parameters in network environments, PMC has been studied in the online learning context, i.e., as the PMC bandit. However, existing PMC bandits lack context-awareness and fail to exploit valuable contextual information, limiting their efficiency and adaptability in dynamic environments. To address this limitation, we propose a novel context-aware PMC bandit model (C-PMC). C-PMC employs a linear structure to model the mean outcome of each arm, efficiently incorporating contextual features and enhancing its applicability to large-scale network systems. We then design a variance-adaptive contextual combinatorial upper confidence bound algorithm (VAC2UCB), which utilizes second-order statistics, specifically variance, to re-weight feedback data and estimate unknown parameters. Our theoretical analysis shows that C-PMC achieves a regret of \(\tilde{O}(d\sqrt{VT})\), independent of the number of edges E and the action size K. Finally, we conduct experiments on synthetic and real-world datasets, showing the superior performance of VAC2UCB in context-aware mobile crowdsensing and user-targeted content delivery applications.
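The variance-adaptive principle can be illustrated with a scalar UCB-V-style selection rule, where the exploration bonus shrinks for arms with low empirical variance. VAC2UCB operates in a contextual, combinatorial setting and re-weights feedback data rather than scoring independent arms; this sketch only shows the shape of a variance-dependent confidence bonus.

```python
import math

def ucb_v_pick(counts, means, variances, t):
    """Pick an arm by a UCB-V-style index: empirical mean plus a bonus
    that grows with the arm's empirical variance and shrinks with its
    pull count. Unpulled arms are played first."""
    best, best_ucb = None, -float('inf')
    for a in range(len(counts)):
        if counts[a] == 0:
            return a  # ensure every arm is sampled at least once
        bonus = (math.sqrt(2 * variances[a] * math.log(t) / counts[a])
                 + 3 * math.log(t) / counts[a])
        ucb = means[a] + bonus
        if ucb > best_ucb:
            best, best_ucb = a, ucb
    return best
```

Between two arms with equal means and equal pull counts, the higher-variance arm gets the larger bonus and is explored first, which is the intuition behind variance-adaptive regret bounds.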

Session Chair

Walter Willinger (NIKSUN, USA)

Session F-10

F-10: Spectrum Access and Sensing

Conference
1:30 PM — 3:00 PM PDT
Local
May 23 Thu, 3:30 PM — 5:00 PM CDT
Location
Regency F

Effi-Ace: Efficient and Accurate Prediction for High-Resolution Spectrum Tenancy

Rui Zou (North Carolina State University, USA); Wenye Wang (NC State University, USA)

Spectrum prediction is a key enabler for the forthcoming coexistence paradigm in which various Radio Access Technologies share overlapping radio spectrum, to substantially improve spectrum efficiency in 5G and beyond systems. Though this fundamental issue has received tremendous research attention, existing algorithms are designed for and validated against spectrum usage data at low time-frequency granularities, which causes inevitable errors when they are applied to spectrum prediction at realistic resolutions. Therefore, in this paper, we improve three key components of the spectrum prediction pipeline. First, we obtain raw spectrum data at the same resolution as scheduling, which reflects the actual dynamics of the subject to be predicted. Second, we improve the Deep Q-Network (DQN) prediction algorithm with enhanced experience replay to reduce the sample complexity, so that the improved DQN is more efficient in terms of sample quantities. Third, new prediction features are extracted from the high-resolution measurement data to improve prediction accuracy. According to our thorough experiments, the proposed prediction algorithm reduces sample complexity by 88.9% and improves prediction accuracy by up to 14% compared with various state-of-the-art counterparts.
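As one common way to make a DQN more sample-efficient, the snippet below sketches a minimal prioritized experience replay buffer, where transitions with larger TD error are replayed more often. The abstract does not specify the paper's enhanced experience replay, so this is a generic illustration of the idea, not Effi-Ace's design.

```python
import random

class PrioritizedReplay:
    """Minimal prioritized replay: each transition is stored with a
    priority proportional to its absolute TD error (plus a small epsilon
    so nothing is starved), and sampling is weighted by priority."""

    def __init__(self, capacity=1000, eps=1e-3):
        self.capacity, self.eps = capacity, eps
        self.buffer, self.priorities = [], []

    def add(self, transition, td_error):
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)      # evict the oldest transition
            self.priorities.pop(0)
        self.buffer.append(transition)
        self.priorities.append(abs(td_error) + self.eps)

    def sample(self, k):
        # draw k transitions with probability proportional to priority
        return random.choices(self.buffer, weights=self.priorities, k=k)
```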

Scalable Network Tomography for Dynamic Spectr