Session D-1

D-1: Federated Learning 1

Conference time: 11:00 AM — 12:30 PM PDT
Local time: May 21 Tue, 1:00 PM — 2:30 PM CDT
Location: Regency E

AeroRec: An Efficient On-Device Recommendation Framework using Federated Self-Supervised Knowledge Distillation

Tengxi Xia and Ju Ren (Tsinghua University, China); Rao Wei, Zu Qin, Wang Wenjie and Chen Shuai (Meituan, China); Yaoxue Zhang (Tsinghua University, China)

Modern recommendation systems rely on centralized servers and require users to upload their behavior data, raising significant privacy concerns. Federated learning (FL), designed to uphold user privacy, is emerging as a natural solution. By fusing FL with recommendation systems, Federated Recommendation Systems (FRS) enable collaborative model training without exposing individual user data. Yet current federated frameworks often overlook the constraints of mobile devices, such as limited storage and computational capabilities. While lightweight models are well suited to these devices, they struggle to achieve the desired accuracy, especially given the communication overheads and sparsity of recommendation data. We propose AeroRec, an efficient on-device federated recommendation framework. It employs federated self-supervised distillation to enhance the global model, promoting rapid convergence and surpassing the limits of typical lightweight models. Through extensive experiments on three real-world datasets, we demonstrate that AeroRec outperforms several state-of-the-art FRS frameworks in recommendation accuracy and convergence speed.
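The abstract leaves the distillation objective unspecified; as a hedged illustration of the general recipe it alludes to, a lightweight on-device recommender (student) learning from a stronger global model (teacher) with an added self-supervised term might be trained roughly as below. The model names, loss forms, and weights are illustrative assumptions, not AeroRec's actual design.

```python
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, items, optimizer,
                      temperature=2.0, alpha=0.5):
    """One sketched self-supervised distillation update (illustrative only).

    student, teacher: models mapping an item batch to scores over the catalog.
    The KL term transfers the teacher's soft ranking; the contrastive term
    stands in for the self-supervised component mentioned in the abstract.
    """
    teacher.eval()
    student.train()
    with torch.no_grad():
        t_logits = teacher(items)                       # soft targets
    s_logits = student(items)

    # Knowledge-distillation loss on softened score distributions.
    kd = F.kl_div(F.log_softmax(s_logits / temperature, dim=-1),
                  F.softmax(t_logits / temperature, dim=-1),
                  reduction="batchmean") * temperature ** 2

    # Toy self-supervised term: two stochastic "views" of the same batch
    # (they differ only if the student has dropout or other noise) should
    # produce similar representations.
    view1 = F.normalize(student(items), dim=-1)
    view2 = F.normalize(student(items), dim=-1)
    ssl = 1.0 - (view1 * view2).sum(dim=-1).mean()

    loss = alpha * kd + (1 - alpha) * ssl
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```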
Speaker Tengxi Xia (Tsinghua University)

Hello everyone, my name is Xia Tengxi. I completed my undergraduate degree in Software Engineering at Harbin University of Science and Technology. I am currently pursuing a doctoral degree in the Computer Science Department at Tsinghua University.


Agglomerative Federated Learning: Empowering Larger Model Training via End-Edge-Cloud Collaboration

Zhi Yuan Wu and Sheng Sun (Institute of Computing Technology, Chinese Academy of Sciences, China); Yuwei Wang (Institute of Computing Technology, Chinese Academy of Sciences, China); Min Liu (Institute of Computing Technology, Chinese Academy of Sciences, China); Bo Gao (Beijing Jiaotong University, China); Quyang Pan, Tianliu He and Xuefeng Jiang (Institute of Computing Technology, China)

Federated Learning (FL) enables training Artificial Intelligence (AI) models over end devices without compromising their privacy. As computing tasks are increasingly performed by a combination of cloud, edge, and end devices, FL can benefit from this End-Edge-Cloud Collaboration (EECC) paradigm to achieve collaborative device-scale expansion with real-time access. Although Hierarchical Federated Learning (HFL) supports multi-tier model aggregation suitable for EECC, prior works assume the same model structure on all computing nodes, constraining the model scale to that of the weakest end devices. To address this issue, we propose Agglomerative Federated Learning (FedAgg), a novel EECC-empowered FL framework that allows the trained models to grow larger in size and stronger in generalization ability from end to edge to cloud. FedAgg recursively organizes computing nodes across all tiers based on a Bridge Sample Based Online Distillation Protocol (BSBODP), which enables every pair of parent-child computing nodes to mutually transfer and distill knowledge extracted from generated bridge samples. This design enhances performance by exploiting the potential of larger models while satisfying both the privacy constraints of FL and the flexibility requirements of EECC. Experiments under various settings demonstrate that FedAgg outperforms state-of-the-art methods by an average of 4.53% in accuracy and achieves remarkable improvements in convergence rate.
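The BSBODP details (in particular, how bridge samples are generated) are in the paper; the snippet below is only a hedged sketch of the mutual parent-child distillation the abstract describes, with bridge_x a placeholder batch and the optimizers and temperature illustrative.

```python
import torch
import torch.nn.functional as F

def bsbodp_round(child, parent, bridge_x, opt_child, opt_parent, tau=2.0):
    """One sketched mutual-distillation exchange between a parent-child pair."""
    # Child learns from the (typically larger) parent model.
    c_logits = child(bridge_x)
    with torch.no_grad():
        p_soft = F.softmax(parent(bridge_x) / tau, dim=-1)
    loss_c = F.kl_div(F.log_softmax(c_logits / tau, dim=-1),
                      p_soft, reduction="batchmean")
    opt_child.zero_grad()
    loss_c.backward()
    opt_child.step()

    # The parent also distills from the child ("mutual" transfer).
    p_logits = parent(bridge_x)
    with torch.no_grad():
        c_soft = F.softmax(child(bridge_x) / tau, dim=-1)
    loss_p = F.kl_div(F.log_softmax(p_logits / tau, dim=-1),
                      c_soft, reduction="batchmean")
    opt_parent.zero_grad()
    loss_p.backward()
    opt_parent.step()
    return loss_c.item(), loss_p.item()
```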
Speaker Zhiyuan Wu (Institute of Computing Technology, Chinese Academy of Sciences)

Zhiyuan Wu is currently a research assistant with the Institute of Computing Technology, Chinese Academy of Sciences (ICT, CAS). He has contributed several technical papers as the first author to top-tier conferences and journals in the fields of computer architecture, computer networks, and intelligent systems, including IEEE Transactions on Parallel and Distributed Systems (TPDS), IEEE Transactions on Mobile Computing (TMC), IEEE International Conference on Computer Communications (INFOCOM), and ACM Transactions on Intelligent Systems and Technology (TIST). He has served as a technical program committee member or a reviewer for over 10 conferences and journals, and was invited to serve as a session chair for the International Conference on Computer Technology and Information Science (CTIS). He is a member of IEEE, ACM, and the China Computer Federation (CCF), and was granted the President Special Prize of ICT, CAS. His research interests include federated learning, mobile edge computing, and distributed systems.


BR-DeFedRL: Byzantine-Robust Decentralized Federated Reinforcement Learning with Fast Convergence and Communication Efficiency

Jing Qiao (Shandong University, China); Zuyuan Zhang (George Washington University, USA); Sheng Yue (Tsinghua University, China); Yuan Yuan (Shandong University, China); Zhipeng Cai (Georgia State University, USA); Xiao Zhang (Shandong University, China); Ju Ren (Tsinghua University, China); Dongxiao Yu (Shandong University, China)

In this paper, we propose Byzantine-Robust Decentralized Federated Reinforcement Learning (BR-DeFedRL), a framework that counters the harmful influence of Byzantine agents by adaptively adjusting communication weights, thereby significantly enhancing the robustness of the learning system. By leveraging decentralized learning, our approach eliminates the dependence on a central server. Balancing the number of communication rounds against sample complexity, BR-DeFedRL achieves efficient convergence at a rate of O(1/(TN)), where T denotes the number of communication rounds and N represents the number of local steps related to variance reduction. Notably, each agent attains an ε-approximation with a state-of-the-art sample complexity of O(1/(εN) + 1/ε). Extensive experimental validation further affirms the efficacy of BR-DeFedRL, making it a promising and practical solution for Byzantine-robust decentralized federated reinforcement learning.
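The abstract does not give the weight-adjustment rule; the sketch below is a generic, hedged illustration of down-weighting suspicious neighbors in decentralized aggregation, not BR-DeFedRL's actual rule. The distance measure, the temperature sigma, and the reserved self-weight are all assumptions.

```python
import numpy as np

def robust_mixing_weights(own_params, neighbor_params, sigma=1.0):
    """Down-weight neighbors whose parameters lie far from our own (sketch)."""
    dists = np.array([np.linalg.norm(own_params - p) for p in neighbor_params])
    raw = np.exp(-dists / sigma)           # distant (possibly Byzantine) neighbors get small weight
    weights = raw / (raw.sum() + 1.0)      # reserve mixing mass for our own model
    mixed = (1.0 - weights.sum()) * own_params
    for w, p in zip(weights, neighbor_params):
        mixed = mixed + w * p
    return weights, mixed
```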
Speaker
Speaker biography is not available.

Breaking Secure Aggregation: Label Leakage from Aggregated Gradients in Federated Learning

Zhibo Wang, Zhiwei Chang and Jiahui Hu (Zhejiang University, China); Xiaoyi Pang (Wuhan University, China); Jiacheng Du (Zhejiang University, China); Yongle Chen (Taiyuan University of Technology, China); Kui Ren (Zhejiang University, China)

Federated Learning (FL) exhibits privacy vulnerabilities under gradient inversion attacks (GIAs), which can extract private information from individual gradients. To enhance privacy, FL incorporates Secure Aggregation (SA) to prevent the server from obtaining individual gradients, thus effectively resisting GIAs. In this paper, we propose a stealthy label inference attack to bypass SA and recover individual clients' private labels. Specifically, we conduct a theoretical analysis of label inference from the aggregated gradients that are exclusively obtained after implementing SA. The analysis reveals that the inputs (embeddings) and outputs (logits) of the final fully connected layer (FCL) contribute to gradient disaggregation and label restoration. To preset the embeddings and logits of the FCL, we craft a fishing model by solely modifying the parameters of a single batch normalization (BN) layer in the original model. By distributing client-specific fishing models, the server can derive the individual gradients with respect to the bias of the FCL by solving a linear system with the expected embeddings and the aggregated gradients as coefficients. The labels of each client can then be precisely computed from the preset logits and the gradients of the FCL's bias. Extensive experiments show that our attack achieves large-scale label recovery with 100% accuracy on various datasets and model architectures.
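The closed-form step at the end rests on a standard fact about cross-entropy: for a single sample, the gradient of the final layer's bias equals softmax(logits) minus the one-hot label, so once the logits are preset and the per-client bias gradient is disaggregated from the secure aggregate, the label can be read off exactly. Below is a minimal sketch of that last step only, assuming a single-sample gradient; the fishing-model construction and the linear-system disaggregation from the paper are not reproduced.

```python
import numpy as np

def label_from_bias_grad(bias_grad, logits):
    """Recover the label from the final-layer bias gradient and preset logits."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    onehot = probs - bias_grad          # softmax(logits) - grad = one-hot(y)
    return int(np.argmax(onehot))

# Tiny self-check under the stated single-sample assumption.
logits = np.array([1.0, 0.2, -0.5])
y = 2
probs = np.exp(logits - logits.max()); probs /= probs.sum()
grad = probs.copy(); grad[y] -= 1.0     # cross-entropy bias gradient
assert label_from_bias_grad(grad, logits) == y
```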
Speaker Zhiwei Chang (Zhejiang University)

Hi, I am Zhiwei Chang, a graduate student at the School of Computer Science, Zhejiang University. My research focuses on security and privacy issues in federated learning.


Session Chair

Qin Hu (IUPUI, USA)

Session D-2

D-2: Multi-Armed Bandits

Conference time: 2:00 PM — 3:30 PM PDT
Local time: May 21 Tue, 4:00 PM — 5:30 PM CDT
Location: Regency E

Achieving Regular and Fair Learning in Combinatorial Multi-Armed Bandit

Xiaoyi Wu and Bin Li (The Pennsylvania State University, USA)

The combinatorial multi-armed bandit is a model for maximizing cumulative rewards in the presence of uncertainty. Motivated by two important wireless network applications, multi-user interactive and panoramic scene delivery and timely information delivery from sensing sources, it is important not only to maximize cumulative rewards but also to ensure fairness among arms (i.e., the minimum average reward required by each arm) and reward regularity (i.e., how often each arm receives a reward). In this paper, we develop a parameterized regular and fair learning algorithm to achieve these three objectives. In particular, the proposed algorithm linearly combines virtual queue lengths (tracking fairness violations), Time-Since-Last-Reward (TSLR) metrics, and Upper Confidence Bound (UCB) estimates in its weight measure. Here, TSLR is similar to age-of-information: it measures the number of rounds elapsed since an arm last received a reward, capturing reward regularity, while the UCB estimates balance the tradeoff between exploration and exploitation in online learning. By capturing a key relationship between virtual queue lengths and TSLR metrics and utilizing several non-trivial Lyapunov functions, we analytically characterize the zero cumulative fairness violation, reward regularity, and cumulative regret performance of our proposed algorithm. These findings are corroborated by extensive simulations.
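As a rough sketch of the weight measure described above, with illustrative coefficients alpha and beta (the paper's exact combination and update rules may differ), an arm-selection step could look like the following.

```python
import numpy as np

def select_arms(mean_est, pulls, t, queue, tslr, k, alpha=1.0, beta=1.0):
    """Pick the k arms with the largest combined UCB + fairness + regularity weight.

    mean_est, pulls, queue, tslr: per-arm arrays of empirical means, pull counts,
    virtual queue lengths, and rounds since the last reward.
    """
    ucb = mean_est + np.sqrt(2.0 * np.log(max(t, 2)) / np.maximum(pulls, 1))
    weight = ucb + alpha * queue + beta * tslr
    return np.argsort(weight)[-k:]
```

In such a scheme, an arm's virtual queue would typically grow by its required minimum reward rate and shrink when the arm earns a reward, while its TSLR resets to zero on a reward and otherwise increases by one each round; the exact updates are the paper's.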
Speaker
Speaker biography is not available.

Adversarial Combinatorial Bandits with Switching Cost and Arm Selection Constraints

Yin Huang (University of Miami, USA); Qingsong Liu (Tsinghua University, China); Jie Xu (University of Miami, USA)

The multi-armed bandit (MAB) framework is widely used for sequential decision-making under uncertainty. To address the increasing complexity of real-world systems and their operational requirements, researchers have proposed and studied various extensions to the basic MAB framework. In this paper, we focus on an adversarial MAB problem inspired by real-world systems with combinatorial semi-bandit arms, switching costs, and anytime cumulative arm selection constraints. To tackle this challenging problem, we introduce the Block-structured Follow-the-Regularized-Leader (B-FTRL) algorithm. Our approach employs a hybrid Tsallis-Shannon entropy regularizer in arm selection and incorporates a block structure that divides time into blocks to minimize arm switching costs. The theoretical analysis shows that B-FTRL achieves a reward regret bound of \(O(T^{\frac{2a-b+1}{1+a}} + T^{\frac{b}{1+a}})\) and a switching regret bound of \(O(T^{\frac{1}{1+a}})\). By carefully selecting the values of \(a\) and \(b\), we are able to limit the total regret to \(O(T^{2/3})\) while satisfying the arm selection constraints in expectation. This outperforms the state-of-the-art regret bound of \(O(T^{3/4})\) and expected constraint violation bound of \(o(1)\), which are derived in less challenging stochastic reward environments. Additionally, we provide a high-probability constraint violation bound of \(O(\sqrt{T})\). Numerical results demonstrate its superiority over other existing methods.
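The exact form of B-FTRL's hybrid regularizer is defined in the paper; as a hedged illustration of the two ingredients it names, the function below computes a generic weighted combination of Tsallis entropy (with parameter a) and Shannon entropy over an arm-selection probability vector. The weights eta_t and eta_s are placeholders, and how the regularizer enters the FTRL objective is not shown.

```python
import numpy as np

def hybrid_entropy(x, a=0.5, eta_t=1.0, eta_s=1.0):
    """Weighted Tsallis + Shannon entropy of a probability vector x (sketch)."""
    x = np.clip(x, 1e-12, 1.0)
    tsallis = (1.0 - np.sum(x ** a)) / (a - 1.0)
    shannon = -np.sum(x * np.log(x))
    return eta_t * tsallis + eta_s * shannon
```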
Speaker
Speaker biography is not available.

Backlogged Bandits: Cost-Effective Learning for Utility Maximization in Queueing Networks

Juaren Steiger (Queen's University, Canada); Bin Li (The Pennsylvania State University, USA); Ning Lu (Queen's University, Canada)

Bipartite queueing networks with unknown statistics, where jobs are routed to and queued at servers and yield server-dependent utilities upon completion, model a wide range of problems in communications and related research areas (e.g., call routing in call centers, task assignment in crowdsourcing, job dispatching to cloud servers). The utility maximization problem in bipartite queueing networks with unknown statistics is a bandit learning problem with delayed semi-bandit feedback that depends on the server queueing delay. In this paper, we propose an efficient algorithm that overcomes the technical shortcomings of the state-of-the-art and achieves regret, queue lengths, and feedback delays that are all square-root in the time horizon. Our approach also accommodates additional constraints, such as quality of service, fairness, and budgeted cost constraints, with constant expected peak violation and zero expected violation after a fixed timeslot. Empirically, our algorithm's regret is competitive with the state-of-the-art on some problem instances and outperforms it on others, with much lower delay and constraint violation.
Speaker Juaren Steiger (Queen's University)

Juaren Steiger is a PhD student at Queen's University in Canada studying machine learning and its applications to communication networks.


Edge-MSL: Split Learning on the Mobile Edge via Multi-Armed Bandits

Taejin Kim (CACI Intl. Inc. & Carnegie Mellon University, USA); Jinhang Zuo (University of Massachusetts Amherst & California Institute of Technology, USA); Xiaoxi Zhang (Sun Yat-sen University, China); Carlee Joe-Wong (Carnegie Mellon University, USA)

The emergence of 5G technology and edge computing enables the collaborative use of data by mobile users for scalable training of machine learning models. Privacy concerns and communication constraints, however, can prohibit users from offloading their data to a single server for training. Split learning, in which models are split between end users and a central server, somewhat resolves these concerns but requires exchanging information between users and the server in each local training iteration. Thus, splitting models between end users and geographically close edge servers can significantly reduce communication latency and training time. In this setting, users must decide to which edge servers they should offload part of their model to minimize the training latency, a decision that is further complicated by the presence of multiple, mobile users competing for resources. We present Edge-MSL, a novel formulation of the mobile split learning problem as a contextual multi-armed bandits framework. To counter scalability challenges with a centralized Edge-MSL solution, we introduce a distributed solution that minimizes competition between users for edge resources, reducing regret by at least two times compared to a greedy baseline. The distributed Edge-MSL approach improves trained model convergence with a 15% increase in test accuracy.
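Edge-MSL's contextual-bandit formulation is not spelled out in the abstract; as a hedged illustration of the general idea, a generic LinUCB learner choosing among edge servers from per-server context features (e.g., load, proximity) might look like the following, with the reward standing in for negative observed training-iteration latency. The class, features, and parameter alpha are assumptions, not the paper's algorithm.

```python
import numpy as np

class LinUCBServerPicker:
    """Generic LinUCB contextual bandit for picking an edge server (sketch)."""

    def __init__(self, n_servers, dim, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_servers)]      # per-server design matrices
        self.b = [np.zeros(dim) for _ in range(n_servers)]    # per-server reward vectors

    def choose(self, contexts):
        """contexts[s]: feature vector for server s in the current round."""
        scores = []
        for A, b, x in zip(self.A, self.b, contexts):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, server, context, reward):
        self.A[server] += np.outer(context, context)
        self.b[server] += reward * context
```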
Speaker Taejin Kim

Taejin Kim is a research engineer at CACI, working in the area of distributed machine learning systems and security. Prior to joining CACI, he was a PhD student at Carnegie Mellon University, where he performed research in mobile edge computing and distributed learning optimization.


Session Chair

Bo Ji (Virginia Tech, USA)

Session D-3

D-3: Federated Learning 2

Conference time: 4:00 PM — 5:30 PM PDT
Local time: May 21 Tue, 6:00 PM — 7:30 PM CDT
Location: Regency E

Fed-CVLC: Compressing Federated Learning Communications with Variable-Length Codes

Xiaoxin Su (Shenzhen University, China); Yipeng Zhou (Macquarie University, Australia); Laizhong Cui (Shenzhen University, China); John C.S. Lui (The Chinese University of Hong Kong, Hong Kong); Jiangchuan Liu (Simon Fraser University, Canada)

In the Federated Learning (FL) paradigm, a parameter server (PS) concurrently communicates with distributed participating clients for model collection, update aggregation, and model distribution over multiple rounds, without touching the private data owned by individual clients. FL is appealing for preserving data privacy, yet the communication between the PS and scattered clients can be a severe bottleneck. Model compression algorithms, such as quantization and sparsification, have been suggested, but they generally assume a fixed code length, which does not reflect the heterogeneity and variability of model updates. In this paper, through both analysis and experiments, we show strong evidence that variable-length coding is beneficial for compression in FL. We accordingly present Fed-CVLC (Federated Learning Compression with Variable-Length Codes), which fine-tunes the code length in response to the dynamics of model updates. We develop optimal tuning strategies that minimize the loss function (equivalent to maximizing the model utility) subject to the communication budget. We further demonstrate that Fed-CVLC is a general compression design that bridges quantization and sparsification with greater flexibility. Extensive experiments on public datasets demonstrate that Fed-CVLC remarkably outperforms state-of-the-art baselines, improving model utility by 1.50%-5.44% or shrinking communication traffic by 16.67%-41.61%.
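The abstract's core claim is that the per-update code length should adapt to the updates themselves. Below is a minimal, hedged sketch of that idea only, not Fed-CVLC's actual optimization (which ties code lengths to a loss bound under a communication budget); the energy-proportional bit allocation and uniform quantizer are assumptions.

```python
import numpy as np

def quantize_update(update, bits):
    """Uniform quantization of one update tensor to the given bit width."""
    levels = 2 ** int(bits) - 1
    lo, hi = update.min(), update.max()
    scale = (hi - lo) / levels if hi > lo else 1.0
    return np.round((update - lo) / scale) * scale + lo

def variable_length_compress(updates, bit_budget):
    """Spend more bits on update tensors carrying more energy (sketch).

    The allocation is approximate: it rounds per-tensor shares of the budget
    and gives every tensor at least one bit.
    """
    energy = np.array([float(np.sum(u ** 2)) for u in updates])
    share = energy / (energy.sum() + 1e-12)
    bit_alloc = np.maximum(1, np.round(share * bit_budget)).astype(int)
    return [quantize_update(u, b) for u, b in zip(updates, bit_alloc)], bit_alloc
```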
Speaker
Speaker biography is not available.

Titanic: Towards Production Federated Learning with Large Language Models

Ningxin Su, Chenghao Hu and Baochun Li (University of Toronto, Canada); Bo Li (Hong Kong University of Science and Technology, Hong Kong)

With the recent surge of research interests in Large Language Models (LLMs), a natural question that arises is how pre-trained LLMs can be fine-tuned to tailor to specific needs of enterprises and individual users, while preserving the privacy of data used in the fine-tuning process. On the one hand, sending private data to cloud datacenters for fine-tuning is, without a doubt, unacceptable from a privacy perspective. On the other hand, conventional federated learning requires each client to perform local training, which is not feasible for LLMs with respect to both computation costs and communication overhead. In this paper, we present Titanic, a new systems-oriented framework that allows LLMs to be fine-tuned in a privacy-preserving fashion directly on the client devices where private data is produced, while operating within the resource constraints on computation and communication bandwidth. Titanic first optimally selects a subset of clients with an efficient solution to an integer optimization problem, then partitions an LLM across multiple client devices, and finally fine-tunes the model with no or minimal losses in training performance. Our experimental results show that Titanic achieves superior training performance as compared to conventional federated learning, while preserving data privacy.
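As a rough illustration of the partitioning step only (Titanic poses client selection as an integer program and optimizes the partition jointly), the greedy split below assigns consecutive model layers to an ordered chain of selected clients; the inputs and the greedy rule are assumptions.

```python
def partition_layers(layer_costs, client_budgets):
    """Assign consecutive layers to clients without exceeding each budget (sketch).

    layer_costs[i]: resource cost of layer i; client_budgets[c]: capacity of client c.
    Returns assignment[i] = index of the client hosting layer i.
    """
    assignment, used, c = [], 0.0, 0
    for cost in layer_costs:
        if used + cost > client_budgets[c] and c + 1 < len(client_budgets):
            c, used = c + 1, 0.0          # move on to the next client in the chain
        assignment.append(c)
        used += cost
    return assignment
```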
Speaker Ningxin Su (University of Toronto)

Ningxin Su is a fourth-year Ph.D. student in the Department of Electrical and Computer Engineering, University of Toronto, under the supervision of Prof. Baochun Li. She received her M.E. and B.E. degrees from the University of Sheffield and Beijing University of Posts and Telecommunications in 2020 and 2019, respectively. Her research area includes distributed machine learning, federated learning and networking. Her website is located at ningxinsu.github.io.


FairFed: Improving Fairness and Efficiency of Contribution Evaluation in Federated Learning via Cooperative Shapley Value

Yiqi Liu, Shan Chang and Ye Liu (Donghua University, China); Bo Li (Hong Kong University of Science and Technology, Hong Kong); Cong Wang (City University of Hong Kong, Hong Kong)

The quality of federated learning (FL) is highly correlated with the number and quality of the participants involved, so it is essential to design proper contribution evaluation mechanisms. Shapley Value (SV)-based techniques have been widely used to provide fair contribution evaluation. Existing approaches, however, do not support dynamic participants (e.g., joining and departing) and incur significant computation costs, making them difficult to apply in practice. Worse, participants may be incorrectly valued as making negative contributions under non-IID data scenarios, further jeopardizing fairness. In this work, we propose FairFed to address the above challenges. First, given that each iteration is of equal importance, FairFed treats FL as multiple Single-stage Cooperative Games and evaluates participants at each iteration, effectively coping with dynamic participants and ensuring fairness across iterations. Second, we introduce the Cooperative Shapley Value (CSV) to rectify the negative values of participants, improving fairness while preserving true negative values. Third, we prove that if participants are Strategically Equivalent, the number of participant combinations can be sharply reduced from exponential to polynomial, significantly reducing the computational complexity of CSV. Experimental results show that FairFed achieves up to 25.3x speedup and reduces deviations by three orders of magnitude compared to two state-of-the-art approximation approaches.
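The baseline FairFed improves on is the classical Shapley value computed within a single round, whose cost is exponential in the number of participants; that is exactly the cost the strategic-equivalence result reduces to polynomial. Below is a hedged sketch of the per-round exact computation only; the utility function, the CSV rectification, and the speedup itself are not shown, and `utility` is a placeholder (e.g., aggregate the given players' updates and score the model on a validation set).

```python
from itertools import combinations
from math import factorial

def shapley_values(players, utility):
    """Exact per-round Shapley values; utility maps a frozenset of players to a score."""
    n = len(players)
    sv = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for subset in combinations(others, k):
                s = frozenset(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                sv[p] += weight * (utility(s | {p}) - utility(s))
    return sv
```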
Speaker Yiqi Liu (Donghua University)



Federated Learning While Providing Model as a Service: Joint Training and Inference Optimization

Pengchao Han (Guangdong University of Technology, China); Shiqiang Wang (IBM T. J. Watson Research Center, USA); Yang Jiao (Tongji University, China); Jianwei Huang (The Chinese University of Hong Kong, Shenzhen, China)

While providing a machine learning model as a service to process users' inference requests, online applications can periodically upgrade the model using newly collected data. Federated learning (FL) enables the training of models across distributed clients while keeping the data local. However, existing work has overlooked the coexistence of model training and inference under clients' limited resources. This paper focuses on the joint optimization of model training and inference to maximize inference performance at clients. Such an optimization faces several challenges. The first challenge is to characterize the clients' inference performance when clients may partially participate in FL. To resolve this challenge, we introduce a new notion of age of model (AoM) to quantify client-side model freshness, based on which we use FL's global model convergence error as an approximate measure of inference performance. The second challenge is the tight coupling among clients' decisions, including participation probability in FL, model download probability, and service rates. To address these challenges, we propose an online problem approximation that reduces the problem complexity and optimizes the resources to balance the needs of model training and inference. Experimental results demonstrate that the proposed algorithm improves the average inference accuracy by up to 12%.
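As a small, hedged illustration of the age-of-model notion only (how the paper couples AoM to the convergence error and to the jointly optimized participation, download, and service rates is not reproduced), a client's AoM bookkeeping over rounds might look like the following; the reset conditions and probabilities are assumptions.

```python
import numpy as np

def simulate_aom(T, participate_p, download_p, rng=None):
    """Track a client's age of model: it resets when a fresh global model is obtained."""
    rng = rng or np.random.default_rng(0)
    aom, trace = 0, []
    for _ in range(T):
        if rng.random() < participate_p or rng.random() < download_p:
            aom = 0            # fresh global model obtained this round
        else:
            aom += 1           # keep serving inference with a stale model
        trace.append(aom)
    return trace
```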
Speaker
Speaker biography is not available.

Session Chair

Christopher G. Brinton (Purdue University, USA)
