IEEE INFOCOM 2024

Session C-8: Staleness and Age of Information (AoI)

Conference time: 8:30 AM — 10:00 AM PDT
Local time: May 23 Thu, 11:30 AM — 1:00 PM EDT
Location: Regency C

An Analytical Approach for Minimizing the Age of Information in a Practical CSMA Network

Suyang Wang, Oluwaseun Ajayi and Yu Cheng (Illinois Institute of Technology, USA)

Age of information (AoI) is a crucial metric in modern communication systems, quantifying information freshness at its destination. This study proposes a novel and general approach utilizing stochastic hybrid systems (SHS) for AoI analysis and minimization in carrier sense multiple access (CSMA) networks. Specifically, we consider a practical yet general networking scenario where multiple nodes contend for transmission through a standard CSMA-based medium access control (MAC) protocol, and the tagged node under consideration uses a small transmission buffer to keep its AoI small. We develop, for the first time, an SHS-based analytical model for this finite-buffer transmission system over the CSMA MAC. Moreover, we develop a new method to incorporate the collision probability into the SHS model, with background nodes having heterogeneous traffic arrival rates. Our model enables us to analytically find the optimal sampling rate that minimizes the AoI of the tagged node in a wide range of practical networking scenarios. Our analysis reveals insights into the impact of buffer size when jointly optimizing throughput and AoI. The SHS model is instantiated over an 802.11-based MAC to examine its performance, with comparison against ns-based simulation results. The accuracy of the model and the efficiency of optimal sampling are convincingly demonstrated.
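
Editor's note: for readers new to the metric, if u(t) denotes the generation time of the freshest update received by the destination, the instantaneous age and the time-average AoI that the papers in this session minimize are (standard definitions, not specific to this paper's model):

\Delta(t) = t - u(t), \qquad \bar{\Delta} = \lim_{T \to \infty} \frac{1}{T} \int_0^T \Delta(t)\, dt

In the SHS approach, a discrete state (here, the MAC/buffer state) is paired with a continuous age vector that grows at unit rate between transitions and is reset by linear maps at transitions, which lets \bar{\Delta} be computed from a set of linear equations rather than by simulation.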

Reducing Staleness and Communication Waiting via Grouping-based Synchronization for Distributed Deep Learning

Yijun Li, Jiawei Huang, Zhaoyi Li, Jingling Liu, Shengwen Zhou, Wanchun Jiang and Jianxin Wang (Central South University, China)

Distributed deep learning has been widely employed to train deep neural networks over large-scale datasets. However, the commonly used parameter server architecture suffers from long synchronization times in data-parallel training. Although existing solutions reduce synchronization overhead by breaking the synchronization barriers or limiting the staleness bound, they inevitably experience low convergence efficiency and long synchronization waiting. To address these problems, we propose Gsyn to reduce both synchronization overhead and staleness. Specifically, Gsyn divides workers into multiple groups. Workers in the same group coordinate with each other using the bulk synchronous parallel scheme to achieve high convergence efficiency, and each group communicates with the parameter server asynchronously to reduce synchronization waiting time, consequently increasing convergence efficiency. Furthermore, we theoretically analyze the optimal number of groups that achieves a good tradeoff between staleness and synchronization waiting. Evaluation in a realistic cluster with multiple training tasks demonstrates that Gsyn is effective, accelerating distributed training by up to 27% over state-of-the-art solutions.
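
Editor's note: a minimal sketch of the grouping idea, with names of our own invention (local_gradient, train_step are illustrative, not Gsyn's API). Members of a group average their gradients under a BSP barrier, while each group applies its update to the parameter server independently of the other groups; the sequential loop below stands in for that asynchronous exchange.

import numpy as np

def local_gradient(rng, dim):
    # Stand-in for one worker's backpropagation step.
    return rng.normal(size=dim)

def train_step(groups, params, lr, rng):
    # Each group acts independently; this loop is a sequential stand-in
    # for the asynchronous group-to-parameter-server exchange.
    for group in groups:
        # BSP barrier inside the group: every member must contribute
        # before the group average is formed.
        grads = [local_gradient(rng, params.size) for _ in group]
        group_grad = np.mean(grads, axis=0)
        params -= lr * group_grad   # the group's push applied at the server
    return params

rng = np.random.default_rng(0)
workers = list(range(8))
n_groups = 2            # the paper derives the optimal number of groups
groups = [workers[i::n_groups] for i in range(n_groups)]
params = train_step(groups, np.zeros(4), lr=0.1, rng=rng)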
Speaker Yijun Li (Central South University)



An Easier-to-Verify Sufficient Condition for Whittle Indexability and Application to AoI Minimization

Sixiang Zhou (Purdue University, West Lafayette, USA); Xiaojun Lin (The Chinese University of Hong Kong, Hong Kong & Purdue University, West Lafayette (on Leave), USA)

We study a scheduling problem for a Base Station transmitting status information to multiple User Equipments (UEs) with the goal of minimizing the total expected Age-of-Information (AoI). Such a problem can be formulated as a Restless Multi-Armed Bandit (RMAB) problem and solved asymptotically optimally by a low-complexity Whittle index policy, provided each UE's sub-problem is Whittle indexable. However, proving Whittle indexability can be highly non-trivial, especially when the value function cannot be derived in closed form. In particular, this is the case for the AoI minimization problem with stochastic arrivals and unreliable channels, whose Whittle indexability remains an open problem. To overcome this difficulty, we develop a sufficient condition for Whittle indexability based on the notion of active time (AT). Even though the AT condition shares considerable similarity with the Partial Conservation Law (PCL) condition, it is much easier to understand and verify. We then apply our AT condition to the stochastic-arrival unreliable-channel AoI minimization problem and, for the first time in the literature, prove its Whittle indexability. Our proof uses a novel coupling approach to verify the AT condition, which may also be of independent interest for other large-scale RMAB problems.
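
Editor's note: for context, the standard definitions this abstract relies on. Attach a subsidy \lambda to the passive action of an arm; the arm is Whittle indexable if the set of states in which staying passive is optimal grows monotonically from the empty set to the whole state space as \lambda increases. In that case the Whittle index of a state s is

W(s) = \inf \{ \lambda : \text{the passive action is optimal in state } s \text{ under subsidy } \lambda \}.

Verifying this monotonicity is exactly what is hard when the value function has no closed form, which is the gap the AT condition is designed to close.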

Joint Optimization of Model Deployment for Freshness-Sensitive Task Assignment in Edge Intelligence

Haolin Liu and Sirui Liu (Xiangtan University, China); Saiqin Long (Jinan University, China); Qingyong Deng (Guangxi Normal University, China); Zhetao Li (Jinan University, China)

Edge intelligence aims to push deep learning (DL) services to the network edge to reduce response time and protect privacy. In practice, deploying DL models close to users and updating them in a timely manner can improve the quality of experience (QoE), but doing so increases operating cost and complicates task assignment. To address this challenge, we formulate a joint online optimization problem for DL model deployment (including placement and update) and freshness-sensitive task assignment, with the goal of improving both QoE and application service provider (ASP) profit. In the problem, we introduce the age of information (AoI) to quantify the freshness of a DL model and represent user QoE as an AoI-based utility function. To solve the problem, we propose an online model placement, update, and task assignment (MPUTA) algorithm. It first converts the time-slot-coupled problem into a single-time-slot problem using a regularization technique, and then decomposes the single-time-slot problem into a model deployment subproblem and a task assignment subproblem; a randomized rounding technique handles the former and a graph matching technique solves the latter. In simulation experiments, MPUTA is shown to outperform benchmark algorithms in terms of both user QoE and ASP profit.
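
Editor's note: the abstract does not state the utility's functional form; a common hypothetical choice for an AoI-based QoE utility (not necessarily the paper's) is a monotonically decreasing function of the model's age \Delta, e.g.

U(\Delta) = e^{-\lambda \Delta},

so QoE decays as the deployed model grows stale and is restored when an update resets \Delta to zero.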
Speaker Sirui Liu (Xiangtan University)

Sirui Liu received the B.Eng. degree from Wuhan Polytechnic University, China, in 2021, and is currently pursuing the master's degree in Computer Technology at Xiangtan University, China. His research interests include edge intelligence and dynamic deep learning model deployment.


Session Chair

Hongwei Zhang (Iowa State University, USA)

Session C-9: Internet of Things (IoT) Networks

Conference time: 10:30 AM — 12:00 PM PDT
Local time: May 23 Thu, 1:30 PM — 3:00 PM EDT
Location: Regency C

DTMM: Deploying TinyML Models on Extremely Weak IoT Devices with Pruning

Lixiang Han, Zhen Xiao and Zhenjiang Li (City University of Hong Kong, Hong Kong)

DTMM is a library designed for efficient deployment and execution of machine learning models on weak IoT devices such as microcontroller units (MCUs). The motivation for DTMM comes from the emerging field of tiny machine learning (TinyML), which extends the reach of machine learning to many low-end IoT devices to achieve ubiquitous intelligence. Because of the weak capability of embedded devices, models must be compressed by pruning enough weights before deployment. Although pruning has been studied extensively on many computing platforms, two key issues are exacerbated on MCUs: models need to be deeply compressed without significantly compromising accuracy, and they should perform efficiently after pruning. Existing solutions achieve only one of these objectives, not both. In this paper, we find that pruned models have great potential for efficient deployment and execution on MCUs. We therefore propose DTMM, which combines pruning unit selection, pre-execution pruning optimizations, runtime acceleration, and post-execution low-cost storage to fill the gap for efficient deployment and execution of pruned models. DTMM can be integrated into commercial ML frameworks for practical deployment, and a prototype system has been developed. Extensive experiments on various models show promising gains over state-of-the-art methods.
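
Editor's note: as a point of reference, the sketch below shows the generic structured-pruning step such systems build on — dropping whole convolution filters with the smallest L1 norm so the pruned model stays dense and fast on an MCU. This is a textbook illustration under our own assumptions, not DTMM's pruning-unit selection.

import numpy as np

def prune_conv_filters(weights, keep_ratio):
    # weights: (out_channels, in_channels, kH, kW). Drop the output
    # filters with the smallest L1 norm; return the kept weights and the
    # surviving channel indices (needed to shrink the next layer too).
    l1 = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(keep_ratio * weights.shape[0]))
    keep = np.sort(np.argsort(l1)[-n_keep:])
    return weights[keep], keep

w = np.random.randn(16, 8, 3, 3)
w_pruned, kept = prune_conv_filters(w, keep_ratio=0.25)
print(w_pruned.shape)   # (4, 8, 3, 3): 75% of the filters removed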

Memory-Efficient and Secure DNN Inference on TrustZone-enabled Consumer IoT Devices

Xueshuo Xie (Haihe Lab of ITAI, China); Haoxu Wang, Zhaolong Jian and Li Tao (Nankai University, China); Wei Wang (Beijing Jiaotong University, China); Zhiwei Xu (Haihe Lab of ITAI, China); Guiling Wang (New Jersey Institute of Technology, USA)

Edge intelligence enables resource-demanding DNN inference without transferring original data, addressing data privacy concerns on consumer IoT devices. For privacy-sensitive applications, deploying models in hardware-isolated trusted execution environments (TEEs) becomes essential. However, the limited secure memory in TEEs poses challenges for deploying DNN inference, and alternative techniques such as model partitioning and offloading introduce performance degradation and security issues. In this paper, we present a novel approach for advanced model deployment in TrustZone that ensures comprehensive privacy preservation during model inference. We design a memory-efficient management method to support memory-demanding inference in TEEs. By adjusting memory priorities, we effectively mitigate memory leakage risks and memory overlap conflicts, at the cost of only 32 altered lines of code in the trusted operating system. Additionally, we leverage two tiny libraries to support efficient inference in TEEs: S-Tinylib (2,538 LoC), a tiny deep learning library, and Tinylibm (827 LoC), a tiny math library. We implemented a prototype on a Raspberry Pi 3B+ and evaluated it using three well-known lightweight DNN models. The experimental results demonstrate that our design improves inference speed by 3.13 times and reduces power consumption by over 66.5% compared to non-memory-optimized methods in TEEs.
Speaker Haoxu Wang (Nankai University)

Haoxu Wang received his B.S. degree in computer science and technology from Shandong University in 2021. He is currently working toward his master's degree in the College of Computer Science, Nankai University. His main research interests include trusted execution environments, the Internet of Things, machine learning, and edge computing.


VisFlow: Adaptive Content-Aware Video Analytics on Collaborative Cameras

Yuting Yan, Sheng Zhang, Xiaokun Wang, Ning Chen and Yu Chen (Nanjing University, China); Yu Liang (Nanjing Normal University, China); Mingjun Xiao (University of Science and Technology of China, China); Sanglu Lu (Nanjing University, China)

There is increasing demand for querying live surveillance video streams from large-scale camera networks, for applications such as public safety and intelligent cities. To resolve the conflict between compute-intensive detection models and the limited resources on cameras, a detection-with-tracking framework has gained prominence. Nevertheless, because trackers are susceptible to occlusions and newly appearing objects, frequent detections are required to calibrate the results, so detection demand varies with video content. We therefore propose VisFlow, a mechanism for content-aware analytics on collaborative cameras, which increases detection quality and meets latency requirements by fully utilizing camera resources. We formulate the problem as a long-term non-linear integer program that maximizes detection accuracy. An online mechanism, underpinned by a queue-based algorithm and randomized rounding, is then devised to dynamically orchestrate detection workloads among cameras, adapting to fluctuating detection demands. We rigorously prove long-run guarantees on both the dynamic regret of overall accuracy and the transmission budget. Testbed experiments on Jetson kits demonstrate that VisFlow improves accuracy by 18.3% over the alternatives.

SAMBA: Detecting SSL/TLS API Misuses in IoT Binary Applications

Kaizheng Liu, Ming Yang and Zhen Ling (Southeast University, China); Yuan Zhang (Fudan University, China); Chongqing Lei (Southeast University, China); Lan Luo (Anhui University of Technology, China); Xinwen Fu (University of Massachusetts Lowell, USA)

IoT devices are increasingly adopting the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols. However, misuse of SSL/TLS libraries still threatens communication security. Existing tools for detecting SSL/TLS API misuses rely primarily on source code analysis, while IoT applications are usually released as binaries with no source code. This paper presents SAMBA, a novel tool that automatically detects SSL/TLS API misuses in IoT binaries through static analysis. To overcome the path explosion problem and handle various SSL/TLS implementations, we introduce a three-level reduction method to construct the SSL/TLS API-centric graph (SAG), which is much smaller than the conventional interprocedural control flow graph. We propose a formal expression of API misuse signatures that captures different types of misuse, particularly those in the SSL/TLS connection establishment process. We successfully analyze 115 IoT binaries and find that 94 of them exhibit insecure certificate verification and 112 support deprecated SSL/TLS protocols. SAMBA is the first IoT binary analysis system for detecting SSL/TLS API misuses.
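
Editor's note: to make "insecure certificate verification" concrete, here is the canonical misuse pattern, shown with Python's standard ssl module purely for illustration (SAMBA itself analyzes compiled binaries that use C/C++ SSL/TLS libraries).

import ssl

insecure = ssl.create_default_context()
insecure.check_hostname = False          # misuse: hostname never checked
insecure.verify_mode = ssl.CERT_NONE     # misuse: any certificate accepted,
                                         # enabling man-in-the-middle attacks

secure = ssl.create_default_context()    # safe default: CERT_REQUIRED with
                                         # hostname checking against system CAs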
Speaker Kaizheng Liu (Southeast University)



Session Chair

Zhangyu Guan (University at Buffalo, USA)

Session C-10: Network Security and Privacy

Conference time: 1:30 PM — 3:00 PM PDT
Local time: May 23 Thu, 4:30 PM — 6:00 PM EDT
Location: Regency C

Utility-Preserving Face Anonymization via Differentially Private Feature Operations

Chengqi Li, Sarah Simionescu, Wenbo He and Sanzheng Qiao (McMaster University, Canada); Nadjia Kara (École de Technologie Supérieure, Canada); Chamseddine Talhi (École de Technologie Supérieure, Canada)

Facial images play a crucial role in many web and security applications, but their use comes with notable privacy risks. Despite the availability of various face anonymization algorithms, they often fail to withstand advanced attacks while struggling to maintain utility for downstream applications. We present two novel face anonymization algorithms that use feature operations to overcome these limitations. The first algorithm employs high-level feature matching, while the second adds low-level feature perturbation and regularization. These algorithms significantly enhance the utility of anonymized images while ensuring differential privacy. Additionally, we introduce a task-based benchmark that enables fair and comprehensive evaluations of privacy and utility across algorithms. Through experiments, we demonstrate that our algorithms outperform others in preserving the utility of anonymized facial images in classification tasks while effectively protecting against a wide range of attacks.
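
Editor's note: the building block behind differentially private feature perturbation is the Laplace mechanism sketched below; the paper's algorithms add feature matching and regularization on top, which this illustration (our own, assuming unit L1 sensitivity) does not reproduce.

import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(feature, epsilon, sensitivity=1.0):
    # Adds Laplace(sensitivity/epsilon) noise per coordinate; for a query
    # with the given L1 sensitivity this yields epsilon-DP.
    scale = sensitivity / epsilon
    return feature + rng.laplace(0.0, scale, size=feature.shape)

embedding = rng.random(128)                 # e.g., a face feature vector
private = laplace_mechanism(embedding, epsilon=1.0)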

Toward Accurate Butterfly Counting with Edge Privacy Preserving in Bipartite Networks

Mengyuan Wang, Hongbo Jiang, Peng Peng, Youhuan Li and Wenbin Huang (Hunan University, China)

Butterfly counting is widely used to analyze bipartite networks, but counting butterflies in original bipartite networks can reveal sensitive data and pose a risk to individual privacy, specifically edge privacy. Current privacy notions do not fully address the needs of both user-user and user-item bipartite networks. In this paper, we propose a novel privacy notion, edge decentralized differential privacy (edge DDP), which preserves edge privacy in any bipartite network. We also design the randomized edge protocol (REP) to perturb real edges in bipartite networks. However, the large amount of noise in perturbed bipartite networks often leads to an overcount of butterflies. To achieve accurate butterfly counting, we design the randomized group protocol (RGP) to reduce noise. Combining REP and RGP, we propose a two-phase framework, butterfly counting in limitedly synthesized bipartite networks (BC-LimBN), which synthesizes networks for accurate butterfly counting. BC-LimBN is rigorously proven to satisfy edge DDP. Our experiments on various datasets confirm the high accuracy of BC-LimBN in butterfly counting and its superiority over competitors, with a mean relative error below 10%. Furthermore, our experiments show that BC-LimBN has a low time cost, requiring only a few seconds on our datasets.
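
Editor's note: for intuition, edge perturbation under decentralized/local edge DP is typically built from randomized response on each edge bit, as in this sketch (a standard primitive; REP's actual design and RGP's denoising are more elaborate).

import math, random

def randomized_response(edge_exists, epsilon):
    # Report the true bit with probability e^eps / (1 + e^eps); otherwise flip.
    p_truth = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return edge_exists if random.random() < p_truth else not edge_exists

noisy_edges = [randomized_response(True, epsilon=1.0) for _ in range(5)]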

Efficient and Effective In-Vehicle Intrusion Detection System using Binarized Convolutional Neural Network

Linxi Zhang (Central Michigan University, USA); Xuke Yan (Oakland University, USA); Di Ma (University of Michigan-Dearborn, USA)

Modern vehicles are equipped with multiple Electronic Control Units (ECUs) communicating over in-vehicle networks such as the Controller Area Network (CAN). CAN's inherent security limitations necessitate Intrusion Detection Systems (IDSs) for protection against potential threats. While some IDSs leverage advanced deep learning to improve accuracy, long processing times and large memory footprints remain problems, and existing Binarized Neural Network (BNN)-based IDSs, proposed for efficiency, often compromise accuracy. To this end, we introduce a novel Binarized Convolutional Neural Network (BCNN)-based IDS, designed to exploit the temporal and spatial characteristics of CAN messages to achieve both efficiency and detection accuracy. In particular, our approach includes a novel input generator that captures temporal and spatial correlations of messages, aiding model learning and ensuring high accuracy. Experimental results show that our IDS effectively reduces memory utilization and detection latency while maintaining high detection rates: it runs 4 times faster and uses only 3.3% of the memory required by a full-precision CNN-based IDS, while achieving between 94.19% and 96.82% of the CNN-based IDS's detection accuracy across different attack scenarios. This performance marks a noteworthy improvement over existing state-of-the-art BNN-based IDS designs.
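
Editor's note: why binarization is cheap on embedded hardware — with weights and activations constrained to {-1, +1}, a dot product reduces to an XNOR followed by a popcount on bit-packed vectors. The numpy check below verifies the identity dot = n - 2 * hamming (illustrative; not the paper's IDS code).

import numpy as np

rng = np.random.default_rng(0)
n = 64
a = rng.choice([-1, 1], n)              # binarized activations
w = rng.choice([-1, 1], n)              # binarized weights

a_bits = a > 0                          # encode +1 -> 1, -1 -> 0
w_bits = w > 0
hamming = np.count_nonzero(a_bits ^ w_bits)   # count of differing positions
assert a @ w == n - 2 * hamming         # integer dot product recovered bitwise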

5G-WAVE: A Core Network Framework with Decentralized Authorization for Network Slices

Pragya Sharma and Tolga O Atalay (Virginia Tech, USA); Hans-Andrew Gibbs and Dragoslav Stojadinovic (Kryptowire LLC, USA); Angelos Stavrou (Virginia Tech & Kryptowire, USA); Haining Wang (Virginia Tech, USA)

5G mobile networks leverage Network Function Virtualization (NFV) to offer services in the form of network slices. Each network slice is a logically isolated fragment constructed by service-chaining a set of Virtual Network Functions (VNFs). The Network Repository Function (NRF) acts as a central Open Authorization (OAuth) 2.0 server securing inter-VNF communications, which makes it a single point of failure. We therefore propose 5G-WAVE, a decentralized authorization framework for the 5G core, built by integrating the WAVE framework into the OpenAirInterface (OAI) 5G core. Our design relies on Side-Car Proxies (SCPs) deployed alongside individual VNFs, allowing point-to-point authorization. Each SCP acts as a WAVE engine to create entities and attestations and to verify incoming service requests. We measure the authorization latency overhead for VNF registration, 5G Authentication and Key Agreement (AKA), and data session setup, and observe that WAVE verification adds 155 ms to HTTP transactions in exchange for decentralized authorization. We also evaluate the scalability of 5G-WAVE by instantiating more network slices, observing a 1.4x increase in latency for a 10x growth in network size. Finally, we discuss how 5G-WAVE can significantly reduce the 5G attack surface without using OAuth 2.0, while addressing several key issues in 5G standardization.

Session Chair

Rui Zhang (University of Delaware, USA)

Session C-11: User Experience, Orchestration, and Telemetry

Conference time: 3:30 PM — 5:00 PM PDT
Local time: May 23 Thu, 6:30 PM — 8:00 PM EDT
Location: Regency C

Vulture: Cross-Device Web Experience with Fine-Grained Graphical User Interface Distribution

Seonghoon Park and Jeho Lee (Yonsei University, Korea (South)); Yonghun Choi (Korea Institute of Science and Technology (KIST), Korea (South)); Hojung Cha (Yonsei University, Korea (South))

We propose a cross-device web solution, called Vulture, which distributes the graphical user interface (GUI) elements of apps across multiple devices without requiring modifications to web apps or browsers. Several challenges must be resolved to achieve this. First, the server-peer configuration must be established efficiently to distribute web resources in cross-device web environments. Vulture exploits an in-browser virtual proxy that runs the web server's functionality inside web browsers using a virtual HTTP scheme and a corresponding API. Second, the functional consistency of web apps must be ensured in GUI-distributed environments. Vulture solves this by providing a single-browser illusion with a two-tier Document Object Model (DOM) architecture, which handles view-state changes and user input seamlessly across devices. We implemented Vulture and extensively evaluated the system under various combinations of operating platforms, devices, and network capabilities while running 50 real web apps. The experimental results show that the proposed scheme provides functionally consistent cross-device web experiences through fine-grained GUI distribution. We also confirmed that the in-browser virtual proxy reduces GUI distribution time and view-change reproduction time by averages of 38.47% and 20.46%, respectively.

OpenINT: Dynamic In-Band Network Telemetry with Lightweight Deployment and Flexible Planning

Jiayi Cai (Fuzhou University & Quan Cheng Laboratory, China); Hang Lin (Fuzhou University & Quan Cheng Laboratory, China); Tingxin Sun (Fuzhou University, China); Zhengyan Zhou (Zhejiang University, China); Longlong Zhu (Fuzhou University & Quan Cheng Laboratory, China); Haodong Chen (Fuzhou University, China); Jiajia Zhou (Fuzhou University, China); Dong Zhang (Fuzhou University & Quan Cheng Laboratory, China); Chunming Wu (College of Computer Science, Zhejiang University, China)

The normal operation of data center network management tasks relies on accurate measurement of network status. In-band Network Telemetry (INT) leverages programmable data planes to provide fine-grained, accurate network status. However, existing INT work does not consider adjusting telemetry tasks dynamically — adding, deleting, or modifying the telemetry data collected — without interrupting service. To address this, we propose OpenINT, a lightweight and flexible in-band network telemetry system. The key innovation of OpenINT is decoupling telemetry operations in the data plane into three generic sub-modules, achieving lightweight telemetry. Meanwhile, the control plane uses heuristic algorithms to plan near-optimal telemetry paths dynamically. OpenINT also provides primitives for defining network measurement tasks, abstracting away the details of the underlying telemetry architecture so that network operators can conveniently access network status. A prototype of OpenINT is implemented on a programmable switch equipped with a Tofino chip. Experimental results demonstrate that OpenINT achieves highly flexible dynamic telemetry while significantly reducing network overhead.
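
Editor's note: to picture what INT collects, each INT-capable hop appends per-hop metadata to tagged packets. The toy record below (a hypothetical layout of our own, not OpenINT's wire format) packs a switch ID, queue depth, and timestamp per hop.

import struct

def append_hop(telemetry, switch_id, queue_depth, ts_ns):
    # 4-byte switch ID, 4-byte queue depth, 8-byte timestamp (network order).
    return telemetry + struct.pack("!IIQ", switch_id, queue_depth, ts_ns)

meta = b""
meta = append_hop(meta, switch_id=7, queue_depth=12, ts_ns=1_000_000)
meta = append_hop(meta, switch_id=9, queue_depth=3, ts_ns=1_050_000)
print(len(meta))   # 32 bytes: two 16-byte hop records for the collector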
Speaker Jiayi Cai (Fuzhou University)

Jiayi Cai received the B.S. degree in Computer Science from Fuzhou University, Fuzhou, China. He is currently pursuing the M.S. degree in Computer Software and Theory from the College of Computer and Data Science, Fuzhou University. His research interests include Network Measurement and Programmable Data Plane.


Demeter: Fine-grained Function Orchestration for Geo-distributed Serverless Analytics

Xiaofei Yue, Song Yang and Liehuang Zhu (Beijing Institute of Technology, China); Stojan Trajanovski (Microsoft, United Kingdom (Great Britain)); Xiaoming Fu (University of Goettingen, Germany)

In the era of global services, low-latency analytics over large volumes of geo-distributed data has become a routine requirement for application decision-making. Serverless computing facilitates fast start-up and deployment, making it an attractive choice for geo-distributed analytics. We find that the serverless paradigm has the potential to break through current performance bottlenecks via fine-grained function orchestration, but how to implement and configure such orchestration for geo-distributed analytics remains unclear. To fill this gap, we present Demeter, a scalable fine-grained function orchestrator for geo-distributed serverless analytics systems, which minimizes the average cost of co-existing jobs while satisfying user-specific Service Level Objectives (SLOs). To handle environmental instability and learn diverse function resource demands, a Multi-Agent Reinforcement Learning (MARL) algorithm co-optimizes function placement and resource allocation. The MARL agents extract holistic, compact states via scalable hierarchical Graph Neural Networks (GNNs) and employ a novel actor network that reduces the decision space and model complexity. Finally, we implement a Demeter prototype and evaluate it with realistic analytics workloads. Compared with state-of-the-art baselines, extensive experiments show that Demeter saves costs by 32.7% and 23.3% while reducing SLO violations by 27.4%.
Speaker Xiaofei Yue (Beijing Institute of Technology)

Xiaofei Yue is currently a Ph.D. candidate at the School of Computer Science and Technology at Beijing Institute of Technology, Beijing, China. He received the M.E. degree from Northeastern University, Shenyang, China, in 2022. His main research interests include distributed systems, cloud/serverless computing, and data analytics.


Pscheduler: QoE-Enhanced MultiPath Scheduler for Video Services in Large-scale Peer-to-Peer CDNs

Dehui Wei and Jiao Zhang (Beijing University of Posts and Telecommunications, China); HaoZhe Li (ByteDance, China); Zhichen Xue (Bytedance Ltd., China); Jialin Li (National University of Singapore, Singapore); Yajie Peng (Bytedance, China); Xiaofei Pang (Non, China); Yuanjie Liu (Beijing University of Posts and Telecommunications, China); Rui Han (Bytedance Inc., China)

Video content providers such as Douyin deploy Peer-to-Peer Content Delivery Networks (PCDNs) to reduce the costs associated with Content Delivery Networks (CDNs) while maintaining user-perceived quality of experience (QoE). PCDNs rely on the spare resources of edge devices, such as edge access devices and hosts, to store and distribute data in a Multiple-Server-to-One-Client (MS2OC) communication pattern. This parallel transmission pattern suffers from severe out-of-order data delivery, and directly applying existing MPTCP schedulers to PCDNs fails to meet the twin goals of high aggregate bandwidth and low end-to-end delivery latency.

To address this, we give a comprehensive account of Douyin's self-developed PCDN video transmission system and propose Pscheduler, the first QoE-enhanced packet-level scheduler for PCDN systems. Pscheduler estimates path quality with a congestion-control-decoupled algorithm and distributes data via the proposed path-pick-packet method to ensure smooth video playback. Additionally, a redundant transmission algorithm improves task download speed for segmented video transmission. Our large-scale online A/B tests, covering 100,000 Douyin users and the data of tens of millions of videos, show that Pscheduler achieves an average improvement of 60% in goodput, a 20% reduction in data delivery waiting time, and a 30% reduction in rebuffering rate.
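
Editor's note: as a feel for packet-level multipath scheduling, the toy picker below sends each packet on the path with the earliest estimated delivery time, computed from in-flight bytes, available rate, and smoothed RTT (the names and formula are ours; Pscheduler's congestion-control-decoupled estimator is more involved).

def pick_path(paths, pkt_bytes):
    # Expected arrival time on a path: queue drain time plus one-way delay.
    def eta(p):
        return (p["inflight_bytes"] + pkt_bytes) / p["rate_Bps"] + p["srtt_s"] / 2
    return min(paths, key=eta)

paths = [
    {"id": "peer-A", "rate_Bps": 2e6, "srtt_s": 0.040, "inflight_bytes": 30000},
    {"id": "peer-B", "rate_Bps": 5e5, "srtt_s": 0.015, "inflight_bytes": 2000},
]
print(pick_path(paths, 1200)["id"])   # picks peer-B: lightly loaded, short RTT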
Speaker Dehui Wei (Beijing University of Posts and Telecommunications)

Dehui Wei is currently working toward her Ph.D. degree at the State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications (BUPT). She received the B.E. degree in computer science and technology from Hunan University, Changsha, China, in 2019, and was recognized as an outstanding graduate. Her research interests are in the areas of network transmission control and cloud computing.


Session Chair

Eirini Eleni Tsiropoulou (University of New Mexico, USA)
