Session E-4

E-4: Federated Learning 3

Conference
8:30 AM — 10:00 AM PDT
Local
May 22 Wed, 10:30 AM — 12:00 PM CDT
Location
Regency E

Federated Analytics-Empowered Frequent Pattern Mining for Decentralized Web 3.0 Applications

Zibo Wang and Yifei Zhu (Shanghai Jiao Tong University, China); Dan Wang (The Hong Kong Polytechnic University, Hong Kong); Zhu Han (University of Houston, USA)

The emerging Web 3.0 paradigm aims to decentralize existing web services, enabling desirable properties such as transparency, incentives, and privacy preservation. However, current Web 3.0 applications built on blockchain infrastructure still cannot support complex data analytics tasks in a scalable and privacy-preserving way. This paper introduces the emerging federated analytics (FA) paradigm into the realm of Web 3.0 services, enabling data to stay local while still contributing to complex web analytics tasks in a privacy-preserving way. We propose FedWeb, a tailored FA design for important frequent pattern mining tasks in Web 3.0. FedWeb remarkably reduces the number of participating data owners required to support privacy-preserving Web 3.0 data analytics, based on a novel distributed differential privacy technique. The correctness of mining results is guaranteed by a theoretically rigorous candidate filtering scheme based on Hoeffding's inequality and Chebyshev's inequality. Two response-budget saving solutions are proposed to further reduce the number of participating data owners. Experiments on three representative Web 3.0 scenarios show that FedWeb can improve data utility by ~25.3% and reduce the number of participating data owners by ~98.4%.
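A minimal sketch of the kind of Hoeffding-based candidate filtering the abstract describes (illustrative only; the threshold, confidence level, and response model are our assumptions, and FedWeb additionally uses a Chebyshev-based bound and distributed differential privacy):

```python
# Hoeffding-based candidate filtering: the support of each candidate
# pattern is estimated from n client responses in [0, 1], and a
# confidence radius decides whether the candidate can be confirmed as
# frequent, pruned, or deferred until more responses arrive.
import math

def hoeffding_radius(n: int, delta: float) -> float:
    """Two-sided Hoeffding deviation bound for a mean of n values in [0, 1]."""
    return math.sqrt(math.log(2.0 / delta) / (2.0 * n))

def filter_candidates(support_estimates, n, threshold=0.2, delta=0.05):
    eps = hoeffding_radius(n, delta)
    decisions = {}
    for pattern, p_hat in support_estimates.items():
        if p_hat - eps >= threshold:      # above threshold w.h.p.
            decisions[pattern] = "frequent"
        elif p_hat + eps < threshold:     # below threshold w.h.p.
            decisions[pattern] = "pruned"
        else:                             # needs more client responses
            decisions[pattern] = "undecided"
    return decisions

print(filter_candidates({("a",): 0.31, ("a", "b"): 0.05, ("b",): 0.21}, n=500))
```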
Speaker Zibo Wang (Shanghai Jiao Tong Univ.)



Federated Offline Policy Optimization with Dual Regularization

Sheng Yue and Zerui Qin (Tsinghua University, China); Xingyuan Hua (Beijing Institute of Technology, China); Yongheng Deng and Ju Ren (Tsinghua University, China)

Federated Reinforcement Learning (FRL) has been deemed a promising solution for intelligent decision-making in the era of the Artificial Intelligence of Things. However, existing FRL approaches often entail repeated interactions with the environment during local updating, which can be prohibitively expensive or even infeasible in many real-world domains. To overcome this challenge, this paper proposes a novel offline federated policy optimization algorithm, named DRPO, which enables distributed agents to collaboratively learn a decision policy from private and static data alone, without further environmental interactions. DRPO leverages dual regularization, incorporating both the local behavioral policy and the global aggregated policy, to judiciously cope with the intrinsic two-tier distributional shifts in offline FRL. Theoretical analysis characterizes the impact of the dual regularization on performance, demonstrating that by striking the right balance between the two, DRPO can effectively counteract distributional shifts and ensure strict policy improvement in each federated learning round. Extensive experiments validate the significant performance gains of DRPO over baseline methods.
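As a rough illustration of the dual regularization idea (a toy single-state, discrete-action form under our own assumptions, not the paper's implementation):

```python
# Dual-regularized objective: maximize estimated value while pulling the
# learned policy toward both the local behavioral policy (offline data)
# and the global aggregated policy (federation), via two KL penalties.
import numpy as np

def kl(p, q, eps=1e-12):
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def dual_regularized_loss(pi, q_values, pi_behavior, pi_global, alpha=1.0, beta=1.0):
    expected_value = float(np.dot(pi, q_values))
    return -expected_value + alpha * kl(pi, pi_behavior) + beta * kl(pi, pi_global)

pi      = np.array([0.5, 0.3, 0.2])   # current local policy
q_vals  = np.array([1.0, 0.2, -0.5])  # offline value estimates
pi_beh  = np.array([0.6, 0.3, 0.1])   # local behavioral policy
pi_glob = np.array([0.4, 0.4, 0.2])   # global aggregated policy
print(dual_regularized_loss(pi, q_vals, pi_beh, pi_glob))
```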
Speaker Sheng Yue (Tsinghua University)

Sheng Yue received his B.Sc. in mathematics (2017) and Ph.D. in computer science (2022), from Central South University, China. Currently, he is an assistant researcher with the Department of Computer Science and Technology, Tsinghua University, China. His research interests include network optimization, distributed learning, and reinforcement learning.


FedTC: Enabling Communication-Efficient Federated Learning via Transform Coding

Yixuan Guan, Xuefeng Liu and Jianwei Niu (Beihang University, China); Tao Ren (Institute of Software Chinese Academy of Sciences, China)

Federated learning (FL) enables distributed training by periodically synchronizing model updates among participants. Communication overhead becomes a dominant constraint of FL since participating clients usually suffer from limited bandwidth. To tackle this issue, top-$k$ based gradient compression techniques have been extensively developed in the FL context, manifesting powerful capabilities in reducing gradient volumes. However, these methods are generally applied to the original gradients, where massive spatial redundancies exist and the positions of non-zero parameters vary greatly between gradients, impeding deeper compression. Top-$k$ sparsification may also degrade the performance of trained models due to biased gradient estimation. Targeting the above issues, we propose FedTC, a novel transform-coding-based compression framework. FedTC transforms gradients into a new domain with a more concentrated energy distribution, which facilitates reducing spatial redundancies and biases in subsequent sparsification. Furthermore, the non-zero parameters across clients from different rounds become highly aligned in the transform domain, motivating us to partition gradients into smaller parameter blocks with various alignment levels to better exploit these alignments. Lastly, the positions and values of non-zero parameters are independently compressed in a block-wise manner with customized designs, through which a higher compression ratio is achieved. Theoretical analysis and extensive experiments both demonstrate the effectiveness of our approach.
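A minimal sketch of transform-then-sparsify compression, assuming a 1-D DCT as the transform (FedTC's actual transform, block partitioning, and position coding are more elaborate):

```python
# Transform coding for gradients: move the gradient into a domain with a
# more concentrated energy distribution, keep only the top-k
# coefficients, and reconstruct on the receiving side.
import numpy as np
from scipy.fft import dct, idct

def compress(grad, k):
    coeffs = dct(grad, norm="ortho")          # energy-compacting transform
    idx = np.argsort(np.abs(coeffs))[-k:]     # top-k coefficient positions
    return idx, coeffs[idx]

def decompress(idx, values, length):
    coeffs = np.zeros(length)
    coeffs[idx] = values
    return idct(coeffs, norm="ortho")

rng = np.random.default_rng(0)
g = rng.standard_normal(1024)
idx, vals = compress(g, k=64)                 # keep ~6% of coefficients
g_hat = decompress(idx, vals, len(g))
print("relative error:", np.linalg.norm(g - g_hat) / np.linalg.norm(g))
```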
Speaker Yixuan Guan (Beihang University)

Yixuan Guan received his B.E. degree from Jilin University, Changchun, China, in 2016, and his M.E. degree from South China University of Technology, Guangzhou, China, in 2020. He is currently pursuing his Ph.D. degree at Beihang University, Beijing, China. His research interests include federated learning, data compression, and network communication.


Heroes: Lightweight Federated Learning with Neural Composition and Adaptive Local Update in Heterogeneous Edge Networks

Jiaming Yan, Jianchun Liu, Shilong Wang and Hongli Xu (University of Science and Technology of China, China); Haifeng Liu and Jianhua Zhou (Guangdong OPPO Mobile Telecommunications Corp., Ltd., Dongguan, China)

Federated Learning (FL) enables distributed clients to collaboratively train models without exposing their private data. However, it is difficult to implement efficient FL due to limited client resources. Most existing works compress the transmitted gradients or prune the global model to reduce the resource cost, but leave the compressed or pruned parameters under-optimized, which degrades training performance. To address this issue, the neural composition technique constructs size-adjustable models by composing low-rank tensors, allowing every parameter in the global model to learn knowledge from all clients. Nevertheless, some tensors can only be optimized by a small fraction of clients, so the global model may receive insufficient training, leading to a long completion time, especially in heterogeneous edge scenarios. To this end, we enhance the neural composition technique, enabling all parameters to be fully trained. Further, we propose a lightweight FL framework, called Heroes, with enhanced neural composition and adaptive local update. A greedy-based algorithm is designed to adaptively assign the proper tensors and local update frequencies to participating clients according to their heterogeneous capabilities and resource budgets. Extensive experiments demonstrate that Heroes can reduce traffic consumption by about 72.05% and provide up to a 2.97× speedup compared to the baselines.
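A toy sketch of the neural composition idea under our own simplifications (rank-1 factor pairs; the paper composes low-rank tensors with an enhanced assignment scheme):

```python
# Neural composition: a full weight matrix is composed from a shared
# pool of low-rank factors, so a client's model size is adjusted simply
# by how many factors it is assigned.
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, n_factors = 64, 32, 16
U = rng.standard_normal((n_factors, d_out)) * 0.1   # shared factor pool
V = rng.standard_normal((n_factors, d_in)) * 0.1

def compose_weight(factor_ids):
    """Compose a weight matrix from the selected rank-1 factors."""
    return sum(np.outer(U[i], V[i]) for i in factor_ids)

W_weak = compose_weight(range(4))      # weak client: 4 factors
W_full = compose_weight(range(16))     # strong client: all 16 factors
print(W_weak.shape, np.linalg.matrix_rank(W_full))
```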
Speaker Jiaming Yan (University of Science and Technology of China)

Jiaming Yan received the B.S. degree in 2021 from Hefei University of Technology. He is currently a Ph.D. candidate in the School of Computer Science, University of Science and Technology of China (USTC). His main research interests are edge computing, deep learning and federated learning.


Session Chair

Ruidong Li (Kanazawa University, Japan)

Session E-5

E-5: Machine Learning with Transformers

Conference
10:30 AM — 12:00 PM PDT
Local
May 22 Wed, 12:30 PM — 2:00 PM CDT
Location
Regency E

Galaxy: A Resource-Efficient Collaborative Edge AI System for In-situ Transformer Inference

Shengyuan Ye and Jiangsu Du (Sun Yat-sen University); Liekang Zeng (Hong Kong University of Science and Technology (Guangzhou) & Sun Yat-Sen University, China); Wenzhong Ou (Sun Yat-sen University); Xiaowen Chu (The Hong Kong University of Science and Technology (Guangzhou) & The Hong Kong University of Science and Technology, Hong Kong); Yutong Lu (Sun Yat-sen University); Xu Chen (Sun Yat-sen University, China)

Transformer-based models have unlocked a plethora of powerful intelligent applications at the edge, such as voice assistants in smart homes. Traditional deployment approaches offload the inference workloads to a remote cloud server, which induces substantial pressure on the backbone network and raises users' privacy concerns. To address this, in-situ inference has recently been recognized as a promising paradigm for edge intelligence, but it still confronts significant challenges stemming from the conflict between intensive workloads and limited on-device computing resources. In this paper, we leverage the observation that many edge environments comprise a rich set of accompanying trusted edge devices with idle resources, and propose Galaxy, a collaborative edge AI system that breaks the resource walls across heterogeneous edge devices for efficient Transformer inference acceleration. Galaxy introduces a novel hybrid model parallelism to orchestrate collaborative inference, along with heterogeneity-aware parallelism planning to fully exploit the resource potential. Furthermore, Galaxy devises tile-based fine-grained overlapping of communication and computation to mitigate the impact of tensor synchronization on inference latency in bandwidth-constrained edge environments. Extensive evaluation based on a prototype implementation demonstrates that Galaxy remarkably outperforms state-of-the-art approaches under various edge environment setups, achieving up to a \(2.5\times\) end-to-end latency reduction.
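The tile-based overlap can be illustrated with a small scheduling simulation (our schematic stand-in with sleeps for compute and sync; Galaxy's real pipeline operates on tensor tiles across devices):

```python
# Tile-based communication/computation overlap: while tile i is being
# computed, synchronization of tile i-1 proceeds in a background thread,
# hiding much of the communication latency.
import time
from concurrent.futures import ThreadPoolExecutor

def compute(tile):        # stand-in for local computation on one tile
    time.sleep(0.05)
    return tile

def synchronize(tile):    # stand-in for cross-device tensor synchronization
    time.sleep(0.05)

start = time.time()
with ThreadPoolExecutor(max_workers=1) as comm:
    pending = None
    for tile in range(8):
        out = compute(tile)                      # compute current tile
        if pending is not None:
            pending.result()                     # wait for previous sync
        pending = comm.submit(synchronize, out)  # sync overlaps next compute
    pending.result()
print(f"overlapped: {time.time() - start:.2f}s (fully serial: ~0.80s)")
```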
Speaker
Speaker biography is not available.

Industrial Control Protocol Type Inference Using Transformer and Rule-based Re-Clustering

Yuhuan Liu (The Hong Kong Polytechnic University & Southern University of Science and Technology, Hong Kong); Yulong Ding (Southern University of Science and Technology, China); Jie Jiang (China University of Petroleum-Beijing, China); Bin Xiao (The Hong Kong Polytechnic University, Hong Kong); Shuang-Hua Yang (Department of Computer Science, University of Reading, UK)

The development of the Industrial Internet of Things (IIoT) is impeded by the lack of specifications for unknown protocols. Protocol Reverse Engineering (PRE) plays a crucial role in inferring unpublished protocol specifications by analyzing traffic messages. Since different types within a protocol often have distinct formats, inferring the protocol type is essential for subsequent reverse analysis. Natural Language Processing (NLP) models have demonstrated remarkable capabilities on various sequence tasks, and traffic messages of unknown protocols can be analyzed as sequences. In this paper, we propose a framework for clustering unknown industrial control protocol types. Our framework utilizes a transformer-based auto-encoder network to train on corresponding request and response messages, leveraging the intermediate-layer embedding vectors learned by the network for clustering. The clustering results are employed to extract candidate keywords and establish empirical rules. Subsequently, rule-based re-clustering is performed, and its effectiveness is evaluated against the previous clustering results. Through this re-clustering process, we identify the most effective combination of keywords that defines each type. We evaluate the proposed framework on three general protocols with different type rules and successfully separate all internal protocol types.
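A minimal sketch of the clustering stage under our own assumptions (random vectors stand in for the learned auto-encoder embeddings, and the keyword heuristic is illustrative):

```python
# Cluster embedding vectors with k-means, then look for byte offsets
# that are most stable within a cluster -- promising candidates for the
# type-defining keywords used in rule-based re-clustering.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
embeddings = rng.standard_normal((200, 32))      # stand-in encoder outputs
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(embeddings)

messages = rng.integers(0, 256, size=(200, 16))  # stand-in raw message bytes
for c in range(4):
    cluster_msgs = messages[labels == c]
    variances = cluster_msgs.var(axis=0)
    print(f"cluster {c}: most stable byte offsets {np.argsort(variances)[:3]}")
```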
Speaker
Speaker biography is not available.

OTAS: An Elastic Transformer Serving System via Token Adaptation

Jinyu Chen, Wenchao Xu and Zicong Hong (The Hong Kong Polytechnic University, China); Song Guo (The Hong Kong University of Science and Technology, Hong Kong); Haozhao Wang (Huazhong University of Science and Technology, China); Jie Zhang (The Hong Kong Polytechnic University, Hong Kong); Deze Zeng (China University of Geosciences, China)

Transformer-empowered architectures have become a pillar of the cloud services that keep reshaping our society. However, dynamic query loads and heterogeneous user requirements severely challenge current transformer serving systems, which rely on pre-training multiple variants of a foundation model, i.e., with different sizes, to accommodate varying service demands. Unfortunately, such a mechanism is unsuitable for large transformer models due to the prohibitive training costs and excessive I/O delay. In this paper, we introduce OTAS, the first elastic serving system specially tailored for transformer models, built on lightweight token management. We develop a novel idea called token adaptation that adds prompting tokens to improve accuracy and removes redundant tokens to accelerate inference. To cope with fluctuating query loads and diverse user requests, we enhance OTAS with application-aware selective batching and online token adaptation. OTAS first batches incoming queries with similar service-level objectives to improve the ingress throughput. Then, to strike a tradeoff between the overhead of token increment and the potential for accuracy improvement, OTAS adaptively adjusts the token execution strategy by solving an optimization problem. We implement and evaluate a prototype of OTAS on multiple datasets, showing that OTAS improves the system utility by at least 18.2%.
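A toy sketch of token adaptation under our own assumptions (token norms stand in for learned importance scores; OTAS chooses the strategy by solving an optimization problem over batched queries):

```python
# Token adaptation: drop the least important tokens to cut latency, or
# prepend learned prompt tokens when the service-level objective leaves
# headroom for higher accuracy.
import numpy as np

rng = np.random.default_rng(0)
tokens = rng.standard_normal((196, 768))    # e.g., ViT patch tokens
prompts = rng.standard_normal((8, 768))     # learned prompt tokens

def adapt_tokens(tokens, prompts, keep_ratio, add_prompts):
    scores = np.linalg.norm(tokens, axis=1) # stand-in importance score
    k = int(len(tokens) * keep_ratio)
    kept = tokens[np.argsort(scores)[-k:]]  # remove redundant tokens
    return np.vstack([prompts, kept]) if add_prompts else kept

fast = adapt_tokens(tokens, prompts, keep_ratio=0.5, add_prompts=False)
accurate = adapt_tokens(tokens, prompts, keep_ratio=1.0, add_prompts=True)
print(fast.shape, accurate.shape)           # (98, 768) (204, 768)
```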
Speaker
Speaker biography is not available.

T-PRIME: Transformer-based Protocol Identification for Machine-learning at the Edge

Mauro Belgiovine, Joshua B Groen, Miquel Sirera, Chinenye M Tassie, Sage Trudeau, Stratis Ioannidis and Kaushik Chowdhury (Northeastern University, USA)

Spectrum sharing allows different protocols of the same standard (e.g., the 802.11 family) or different standards (e.g., LTE and DVB) to coexist in overlapping frequency bands. As this paradigm continues to spread, wireless systems must also evolve to identify active transmitters and unauthorized waveforms in real time under intentional distortion of preambles, extremely low signal-to-noise ratios, and challenging channel conditions. This paper mitigates the limitations of correlation-based preamble matching methods in such conditions through the design of T-PRIME, a transformer-based machine learning approach. T-PRIME learns the structural design of transmitted frames through its attention mechanism, looking at patterns of sequences that go beyond the preamble alone. The paper makes three contributions: First, it compares transformer models and demonstrates their superiority over traditional methods and convolutional neural networks. Second, it rigorously analyzes T-PRIME's real-time feasibility on DeepWave's AirT platform. Third, it utilizes an extensive 66 GB dataset of over-the-air (OTA) WiFi transmissions for training, which is released along with the code for community use. Results reveal nearly perfect (i.e., >98%) classification accuracy under simulated scenarios, a 100% detection improvement over legacy methods in low-SNR ranges, 97% classification accuracy for OTA single-protocol transmissions, and up to 75% double-protocol classification accuracy in interference scenarios.
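A simplified stand-in (our own, not the released T-PRIME code) for a transformer classifier over raw I/Q sequences:

```python
# Each complex sample becomes a 2-D (I, Q) token; self-attention can
# pick up frame structure beyond the preamble, and a pooled embedding
# predicts the protocol class.
import torch
import torch.nn as nn

class ProtocolClassifier(nn.Module):
    def __init__(self, d_model=64, n_classes=4):
        super().__init__()
        self.embed = nn.Linear(2, d_model)   # (I, Q) sample -> token
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, iq):                   # iq: (batch, seq_len, 2)
        h = self.encoder(self.embed(iq))
        return self.head(h.mean(dim=1))      # pool over time

model = ProtocolClassifier()
logits = model(torch.randn(8, 128, 2))      # 8 frames of 128 I/Q samples
print(logits.shape)                         # torch.Size([8, 4])
```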
Speaker
Speaker biography is not available.

Session Chair

Minghua Chen (City University of Hong Kong, Hong Kong)

Session E-6

E-6: Federated Learning 4

Conference
1:30 PM — 3:00 PM PDT
Local
May 22 Wed, 3:30 PM — 5:00 PM CDT
Location
Regency E

A Semi-Asynchronous Decentralized Federated Learning Framework via Tree-Graph Blockchain

Cheng Zhang, Yang Xu and Xiaowei Wu (Hunan University, China); En Wang (Jilin University, China); Hongbo Jiang (Hunan University, China); Yaoxue Zhang (Tsinghua University, China)

Decentralized federated learning (DFL) overcomes the single-point-of-failure issue of centralized federated learning. Building upon DFL, blockchain-based federated learning (BFL) takes further strides in establishing trust, enhancing security, and improving fault tolerance, with commercial products serving various domains. However, BFL built on a classical linear-structure blockchain is limited by performance bottlenecks and is less efficient. Some recent schemes introduce the directed acyclic graph (DAG) blockchain as the underlying structure, improving performance at the expense of computational verifiability and facing some security risks. In this paper, we propose TGFL, a decentralized federated learning framework based on a Tree-Graph blockchain. The underlying structure of TGFL is designed as a block-centered DAG blockchain to support semi-asynchronous training. The iterative relationship between models is represented as a tree-graph composed of blocks. To facilitate fast convergence, we design a pivot chain generation algorithm that topologically sorts the asynchronous training process, guiding participants in sampling appropriate models. The effectiveness of model updates is checked as part of TGFL's weak-consistency consensus mechanism. We discuss adaptive attacks and defenses against TGFL, and validate its effectiveness through experimental evaluations against four baseline approaches.
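As an illustration of pivot-chain selection on a block DAG, here is a GHOST-style heaviest-subtree sketch (our stand-in; TGFL's actual generation algorithm and weak-consistency checks are more involved):

```python
# Starting from the genesis block, repeatedly descend into the child
# whose subtree contains the most blocks; the resulting pivot chain
# topologically anchors the semi-asynchronous training history.
children = {  # parent block -> child blocks in a toy tree-graph
    "G": ["A", "B"], "A": ["C"], "B": ["D", "E"],
    "C": [], "D": ["F"], "E": [], "F": [],
}

def subtree_size(block):
    return 1 + sum(subtree_size(c) for c in children[block])

def pivot_chain(genesis="G"):
    chain, block = [genesis], genesis
    while children[block]:
        block = max(children[block], key=subtree_size)  # heaviest subtree
        chain.append(block)
    return chain

print(pivot_chain())  # ['G', 'B', 'D', 'F']
```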
Speaker
Speaker biography is not available.

Momentum-Based Federated Reinforcement Learning with Interaction and Communication Efficiency

Sheng Yue (Tsinghua University, China); Xingyuan Hua (Beijing Institute of Technology, China); Lili Chen and Ju Ren (Tsinghua University, China)

Federated Reinforcement Learning (FRL) has garnered increasing attention recently. However, due to the intrinsic spatio-temporal non-stationarity of data distributions, current approaches typically suffer from high interaction and communication costs. In this paper, we introduce a new FRL algorithm, named MFPO, that utilizes momentum, importance sampling, and additional server-side adjustment to control the shift of stochastic policy gradients and enhance the efficiency of data utilization. We prove that by proper selection of momentum parameters and interaction frequency, MFPO can achieve \(\tilde{\mathcal{O}}(H N^{-1}\epsilon^{-3/2})\) interaction complexity and \(\tilde{\mathcal{O}}(\epsilon^{-1})\) communication complexity (\(N\) represents the number of agents), where the interaction complexity achieves linear speedup with the number of agents, and the communication complexity matches the best achievable by existing first-order FL algorithms. Extensive experiments corroborate the substantial performance gains of MFPO over existing methods on a suite of complex and high-dimensional benchmarks.
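A schematic sketch of a momentum-corrected policy-gradient step with importance weighting (our simplified stand-in for MFPO's local update; the server-side adjustment is omitted):

```python
# STORM-style update: the correction term reuses the previous direction,
# with the old gradient reweighted to the current policy, to control the
# shift (variance) of stochastic policy gradients.
import numpy as np

def momentum_step(theta, u_prev, grad_now, grad_prev_reweighted, eta, lr):
    """u_t = g_t + (1 - eta) * (u_{t-1} - IS-corrected g_{t-1})."""
    u = grad_now + (1.0 - eta) * (u_prev - grad_prev_reweighted)
    return theta + lr * u, u                  # ascent on the objective

rng = np.random.default_rng(0)
theta, u = np.zeros(4), np.zeros(4)
for t in range(3):
    g_now = rng.standard_normal(4)            # stochastic policy gradient
    g_prev = rng.standard_normal(4)           # importance-reweighted old gradient
    theta, u = momentum_step(theta, u, g_now, g_prev, eta=0.5, lr=0.01)
print(theta)
```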
Speaker Sheng Yue (Tsinghua University)

Sheng Yue received his B.Sc. in mathematics (2017) and Ph.D. in computer science (2022), from Central South University, China. Currently, he is an assistant researcher with the Department of Computer Science and Technology, Tsinghua University, China. His research interests include network optimization, distributed learning, and reinforcement learning.


SpreadFGL: Edge-Client Collaborative Federated Graph Learning with Adaptive Neighbor Generation

Luying Zhong, Yueyang Pi and Zheyi Chen (Fuzhou University, China); Zhengxin Yu (Lancaster University, United Kingdom (Great Britain)); Wang Miao (University of Plymouth, United Kingdom (Great Britain)); Xing Chen (Fuzhou University, China); Geyong Min (University of Exeter, United Kingdom (Great Britain))

Federated Graph Learning (FGL) has garnered widespread attention by enabling collaborative training across multiple clients for semi-supervised classification tasks. However, most existing FGL studies do not adequately consider the missing inter-client topology information that arises in real-world scenarios, causing insufficient feature aggregation from multi-hop neighbor clients during model training. Moreover, classic FGL commonly adopts FedAvg but neglects the high training costs as the number of clients expands, resulting in the overload of a single edge server. To address these important challenges, we propose a novel FGL framework, named SpreadFGL, to promote the information flow in edge-client collaboration and extract more generalized potential relationships between clients. In SpreadFGL, an adaptive graph imputation generator, incorporated with a versatile assessor, is first designed to exploit the potential links between subgraphs without sharing raw data. Next, a new negative sampling mechanism is developed to make SpreadFGL concentrate on more refined information in downstream tasks. To facilitate load balancing at the edge layer, SpreadFGL follows a distributed training manner that enables fast model convergence. Extensive experiments on a real-world testbed and benchmark graph datasets demonstrate the effectiveness of the proposed SpreadFGL. The results show that SpreadFGL achieves higher accuracy and faster convergence than state-of-the-art algorithms.
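A toy sketch, under our own assumptions, of graph imputation across clients (embedding cosine similarity scores candidate inter-client links; SpreadFGL's generator and assessor are learned):

```python
# Score candidate links between boundary nodes of two clients; high
# scores become imputed neighbors, and random negative pairs give the
# generator contrastive signal.
import numpy as np

rng = np.random.default_rng(0)
emb_a = rng.standard_normal((5, 16))   # client A boundary-node embeddings
emb_b = rng.standard_normal((6, 16))   # client B boundary-node embeddings

def impute_links(emb_a, emb_b, threshold=0.3):
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    scores = a @ b.T                   # cosine similarity of node pairs
    return np.argwhere(scores > threshold)

links = impute_links(emb_a, emb_b)
negatives = rng.integers(0, [5, 6], size=(len(links), 2))  # negative samples
print("imputed cross-client links:", links.tolist())
```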
Speaker Luying Zhong (Fuzhou University)

Luying Zhong received the B.S. degree in Computer Science from Fuzhou University, Fuzhou, China. She is currently pursuing the doctoral degree in the College of Computer and Data Science, Fuzhou University. Her research interests include Edge Computing, Federated Learning, and Graph Learning.


Strategic Data Revocation in Federated Unlearning

Ningning Ding, Ermin Wei and Randall A Berry (Northwestern University, USA)

By allowing users to erase their data's impact on federated learning models, federated unlearning protects users' right to be forgotten and their data privacy. Despite a burgeoning body of research on the technical feasibility of federated unlearning, there is a paucity of literature investigating the considerations behind users' requests for data revocation. This paper proposes a non-cooperative game framework to study users' data revocation strategies in federated unlearning. We prove the existence of a Nash equilibrium. However, users' best-response strategies are coupled via model performance and unlearning costs, which makes the equilibrium computation challenging. We obtain the Nash equilibrium by establishing its equivalence with a much simpler auxiliary optimization problem. We also summarize users' multi-dimensional attributes into a single-dimensional metric and derive a closed-form characterization of the equilibrium when users' unlearning costs are negligible. Moreover, we compare the cases of allowing and forbidding partial data revocation in federated unlearning. Interestingly, the results reveal that allowing partial revocation does not necessarily increase users' data contributions or payoffs, due to the game structure. Additionally, we demonstrate that positive externalities may exist between users' data revocation decisions when users incur unlearning costs, while this is not the case when their unlearning costs are negligible.
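A toy best-response iteration for this kind of revocation game (the payoff form below is our illustrative assumption, not the paper's model):

```python
# Each user picks a revocation fraction trading privacy gain against the
# shared model-performance benefit and an unlearning cost, given the
# others' decisions; iterating best responses seeks a Nash equilibrium.
import numpy as np

data    = np.array([10.0, 20.0, 30.0])    # users' data contributions
privacy = np.array([0.068, 0.03, 0.02])   # per-unit privacy valuation
cost    = np.array([0.05, 0.05, 0.05])    # per-unit unlearning cost
grid = np.linspace(0.0, 1.0, 101)         # candidate revocation fractions

def payoff(i, x_i, x):
    others = np.sum(data * (1 - x)) - data[i] * (1 - x[i])
    retained = others + data[i] * (1 - x_i)
    performance = np.log(1.0 + retained)  # shared model benefit
    return privacy[i] * data[i] * x_i + performance - cost[i] * data[i] * x_i

x = np.zeros(3)
for _ in range(50):                       # best-response dynamics
    for i in range(3):
        x[i] = grid[np.argmax([payoff(i, g, x) for g in grid])]
print("revocation fractions at the fixed point:", x)
```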
Speaker Ningning Ding (Northwestern University)

Ningning Ding is a Postdoctoral Scholar with the Department of Electrical and Computer Engineering, Northwestern University, USA. She received her Ph.D. degree at The Chinese University of Hong Kong. Her research focuses on the interdisciplinary area involving artificial intelligence, network systems, and network economics.


Session Chair

Hanif Rahbari (Rochester Institute of Technology, USA)

Session E-7

E-7: Machine Learning 1

Conference
3:30 PM — 5:00 PM PDT
Local
May 22 Wed, 5:30 PM — 7:00 PM CDT
Location
Regency E

Expediting Distributed GNN Training with Feature-only Partition and Optimized Communication Planning

Bingqian Du and Jun Liu (Huazhong University of Science and Technology, China); Ziyue Luo (The Ohio State University, USA); Chuan Wu (The University of Hong Kong, Hong Kong); Qiankun Zhang and Hai Jin (Huazhong University of Science and Technology, China)

Feature-only partition of large graph data in distributed Graph Neural Network (GNN) training offers advantages over the commonly adopted graph-structure partition, such as minimal graph preprocessing cost and elimination of the cross-worker subgraph sampling burden. Nonetheless, the performance bottleneck of GNN training with feature-only partitions still largely lies in the substantial communication overhead due to cross-worker feature fetching. To reduce the communication overhead and expedite distributed training, we first investigate and answer two key questions on the convergence behavior of GNN models in feature-partition-based distributed GNN training: 1) As no worker holds a complete copy of each feature, can gradient exchange among workers compensate for the information loss due to incomplete local features? 2) If the answer to the first question is negative, is feature fetching in every training iteration of the GNN model necessary to ensure model convergence? Based on our theoretical findings on these questions, we derive an optimal communication plan that decides the frequency of feature fetching during the training process, taking into account bandwidth levels among workers and striking a balance between model loss and training time. Extensive evaluation demonstrates results consistent with our theoretical analysis, and the effectiveness of our proposed design.
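The resulting training loop can be sketched as follows (a schematic simplification under our own assumptions; the paper derives the fetch frequency from bandwidth and convergence analysis):

```python
# Remote features are fetched only every `fetch_interval` iterations and
# cached in between, trading slight staleness for far less cross-worker
# communication, while gradients are still exchanged every iteration.
def train(num_iters, fetch_interval, fetch_remote, local_step, all_reduce):
    cached = None
    for it in range(num_iters):
        if it % fetch_interval == 0:
            cached = fetch_remote()       # expensive cross-worker pull
        grads = local_step(cached)        # forward/backward on local shard
        all_reduce(grads)                 # gradients synced every iteration

# Toy stand-ins to make the loop runnable.
calls = {"fetch": 0}
def fetch_remote():
    calls["fetch"] += 1
    return "features"

train(100, fetch_interval=10, fetch_remote=fetch_remote,
      local_step=lambda f: "grads", all_reduce=lambda g: None)
print("feature fetches:", calls["fetch"])  # 10 instead of 100
```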
Speaker Bingqian Du (Huazhong University of Science and Technology)



Workflow Optimization for Parallel Split Learning

Joana Tirana (University College Dublin and VistaMilk SFI, Ireland); Dimitra Tsigkari (Telefonica Research, Spain); George Iosifidis (Delft University of Technology, The Netherlands); Dimitris Chatzopoulos (University College Dublin, Ireland)

Split learning (SL) has recently been proposed as a way to enable resource-constrained devices to train multi-parameter neural networks (NNs) and participate in federated learning (FL). In a nutshell, SL splits the NN model into parts and allows clients (devices) to offload the largest part as a processing task to a computationally powerful helper. In parallel SL, multiple helpers can process model parts of one or more clients, thus considerably reducing the maximum training time over all clients (makespan). In this paper, we focus on orchestrating the workflow of this operation, which is critical in highly heterogeneous systems, as our experiments show. In particular, we formulate the joint problem of client-helper assignment and scheduling decisions with the goal of minimizing the training makespan, and we prove that it is NP-hard. We propose a solution method based on a decomposition of the problem that leverages its inherent symmetry, and a second one that is fully scalable. A wealth of numerical evaluations using our testbed's measurements allows us to build a solution strategy comprising these methods. Moreover, we show that this strategy finds a near-optimal solution and achieves a shorter makespan than the baseline scheme by up to 52.3%.
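For intuition, a simple LPT-style greedy heuristic for client-helper assignment (one of many baselines; not the paper's decomposition-based method):

```python
# Assign clients' offloaded parts, largest first, to whichever helper
# currently finishes earliest, keeping the estimated makespan low.
import heapq

def greedy_assign(client_loads, helper_speeds):
    heap = [(0.0, h) for h in range(len(helper_speeds))]  # (finish time, helper)
    heapq.heapify(heap)
    assignment = {}
    for client, load in sorted(client_loads.items(), key=lambda kv: -kv[1]):
        finish, h = heapq.heappop(heap)       # least-loaded helper so far
        finish += load / helper_speeds[h]     # processing time on this helper
        assignment[client] = h
        heapq.heappush(heap, (finish, h))
    return assignment, max(t for t, _ in heap)

assignment, makespan = greedy_assign(
    {"c1": 8.0, "c2": 5.0, "c3": 4.0, "c4": 3.0}, helper_speeds=[2.0, 1.0])
print(assignment, f"makespan={makespan:.1f}")
```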
Speaker
Speaker biography is not available.

Learning to Decompose Asymmetric Channel Kernels for Generalized Eigenwave Multiplexing

Zhibin Zou, Iresha Amarasekara and Aveek Dutta (University at Albany, SUNY, USA)

Learning the principal eigenfunctions of a kernel is at the core of many machine-learning problems. Common methods usually deal with symmetric kernels based on Mercer's theorem. However, in communication systems, the channel kernel is usually asymmetric due to inconsistencies between the uplink and downlink propagation environments. In this paper, we propose an explainable neural network for extracting eigenfunctions from generic multi-dimensional asymmetric channel kernels by decomposing them into jointly orthogonal eigenfunctions, based on a recent result called the High Order Generalized Mercer's Theorem (HOGMT). The proposed neural-network-based approach is efficient and easy to implement compared to the conventional SVD-based solutions used for eigendecomposition. We also discuss the effect of different hyper-parameters on training time, constraint satisfaction, and overall performance. Finally, we show that multiplexing over these eigenfunctions mitigates interference across all available Degrees of Freedom (DoF), both mathematically and via neural-network-based system-level simulations.
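For reference, the conventional SVD baseline the abstract mentions can be sketched in a few lines (a stand-in random matrix plays the asymmetric channel kernel):

```python
# SVD decomposes an asymmetric kernel into jointly orthogonal left/right
# singular vectors; precoding on the right and combining on the left
# yields interference-free parallel subchannels (eigenwaves).
import numpy as np

rng = np.random.default_rng(0)
K = rng.standard_normal((8, 8))            # stand-in asymmetric channel kernel
U, s, Vh = np.linalg.svd(K)

symbols = np.array([1.0, -1.0, 0.5, 0, 0, 0, 0, 0])
x = Vh.conj().T @ symbols                  # precode along right eigenfunctions
y = K @ x                                  # propagate through the channel
recovered = U.conj().T @ y                 # combine along left eigenfunctions
print(np.round(recovered / s, 3))          # symbols recovered, no interference
```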
Speaker
Speaker biography is not available.

META-MCS: A Meta-knowledge Based Multiple Data Inference Framework

Zijie Tian, En Wang, Wenbin Liu, Baoju Li and Funing Yang (Jilin University, China)

Mobile crowdsensing (MCS) is a paradigm for data collection under budget constraints and limited worker availability. The central strategy of MCS is recruiting workers to sense part of the data and subsequently inferring the unsensed data. To infer unsensed data, prior research has proposed several algorithms that do not require historical data, but their inference accuracy is very limited. More effective approaches train a model with sufficient historical data. However, such methods cannot infer data with little to no historical data. A more promising strategy is training models from other, similar datasets that have already been sensed. However, such datasets differ in terms of sensing locations, numbers of sensed data points, and data types. This variance introduces the complex issue of integrating knowledge from these datasets and then training inference models. To solve these problems, we propose a meta-knowledge-based multiple data inference framework named META-MCS. In META-MCS, we propose a similarity evaluation model, TMFS. Following this, we cluster similar datasets and train generalized models for each cluster. Finally, META-MCS selects an appropriate model to infer the unsensed data. We validate the proposed methods through extensive experiments on ten different datasets, which substantiate the effectiveness of our framework.
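A toy sketch of the model-selection step under our own assumptions (the summary features below are a crude stand-in for the paper's TMFS similarity model):

```python
# Summarize a new task's sparse readings into a feature vector, match it
# to the nearest dataset cluster, and reuse that cluster's generalized
# model to infer the unsensed data.
import numpy as np

rng = np.random.default_rng(0)
cluster_centroids = rng.standard_normal((3, 8))  # one centroid per cluster

def summarize(readings):
    hist = np.histogram(readings, bins=6, range=(0, 1))[0] / len(readings)
    return np.concatenate([[readings.mean(), readings.std()], hist])

def select_model(readings):
    feat = summarize(readings)
    return int(np.argmin(np.linalg.norm(cluster_centroids - feat, axis=1)))

sensed = rng.uniform(0, 1, size=40)              # sparse sensed readings
print("use generalized model of cluster", select_model(sensed))
```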
Speaker Zijie Tian (Jilin University)

Zijie Tian is a master's student in computer science and technology at Jilin University, China, and will receive his degree this year. His research focuses on multi-task Sparse Mobile Crowdsensing.


Session Chair

Mariya Zheleva (UAlbany SUNY, USA)
