Workshops

The 3rd International IEEE INFOCOM Workshop on Distributed Machine Learning and Fog Networks (FOGML 2024)

Session FOGML-S1

FOGML 2024 – Session 1: Robust and Private Distributed Machine Learning

Conference time: 8:30 AM — 9:45 AM PDT
Local time: May 20 Mon, 11:30 AM — 12:45 PM EDT
Location: Plaza A

A Data Reconstruction Attack against Vertical Federated Learning based on Knowledge Transfer

Takumi Suimon (Osaka University, Japan); Yuki Koizumi and Junji Takemasa (Osaka University, Japan); Toru Hasegawa (Shimane University, Japan)

Vertical federated learning is a method in which multiple parties, each holding data on different features of the same samples, collaboratively train a machine learning model without exposing their original data. Naive vertical federated learning systems face severe limitations that degrade their usability; for example, all parties must be involved even to perform inference. A state-of-the-art approach overcomes these limitations by employing knowledge transfer techniques based on federated singular value decomposition. In this framework, parties collaboratively derive latent representations of their data without exposing the data to each other. Each party can then independently train a model and perform inference on it, enabling more practical vertical federated learning. Although the original data of each party is not exposed to other parties during these procedures, we reveal that a semi-honest adversary can deduce the original data from its latent representation. The essential idea behind the proposed attack is that there is a linear relationship between the original data and the corresponding latent representation. Our attack deduces this relationship using a metaheuristic approach based on a GAN-like neural network. We confirm the feasibility of the attack on several well-known datasets.
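To make the core idea concrete, the following is a minimal NumPy sketch (not the paper's implementation): it assumes the latent representation comes from a truncated federated SVD, and stands in a least-squares fit on a small auxiliary set for the paper's GAN-like metaheuristic search for the linear map.

```python
# Illustrative sketch only; dataset, dimensions, and the least-squares
# estimator are assumptions; the paper instead searches for the map with
# a GAN-like metaheuristic.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 1000, 20, 5                      # samples, features, latent dimension

# A party's private feature block with an (approximately) rank-k structure.
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, d)) + 0.01 * rng.normal(size=(n, d))

# Federated SVD yields a shared projection; latents Z = X @ V_k leave the party.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
V_k = Vt[:k].T                             # top-k right singular vectors (d, k)
Z = X @ V_k                                # latent representations (n, k)

# Semi-honest adversary: observes Z plus a few auxiliary (x, z) pairs, and fits
# a linear map from latent space back to the original features.
aux = rng.choice(n, size=50, replace=False)
W, *_ = np.linalg.lstsq(Z[aux], X[aux], rcond=None)

X_hat = Z @ W                              # reconstruction of the private block
print("mean squared reconstruction error:", np.mean((X - X_hat) ** 2))
```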
Speaker Takumi Suimon
Speaker biography is not available.

Joint Client Selection and Privacy Compensation for Differentially Private Federated Learning

Ruichen Xu and Ying Jun (Angela) Zhang (The Chinese University of Hong Kong, Hong Kong); Jianwei Huang (The Chinese University of Hong Kong, Shenzhen, China)

Differentially private federated learning learns a global machine learning model from clients' distributed private data while providing privacy protection. However, clients under differential privacy protection still sustain a certain degree of potential privacy leakage, which can make them reluctant to participate in model training. Hence, both client selection and privacy compensation are important decisions in determining which clients will join the learning. In particular, since a client's privacy leakage increases with its participation frequency, the client selection decision is tightly coupled with the privacy compensation. This paper proposes a Bayesian-optimal mechanism design approach for joint client selection and privacy compensation. Although the resulting problem is a challenging non-convex optimization, we propose an efficient algorithm to solve it: we first characterize the optimal selection probability of clients with heterogeneous privacy sensitivities, which significantly reduces the dimension of the problem. Numerical results show that the proposed mechanism Pareto dominates the unbiased selection-based mechanism in terms of both test accuracy and monetary cost. Specifically, our mechanism decreases the test loss by up to 81.7% under the same monetary cost.
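As a rough illustration of the coupling between participation frequency, privacy leakage, and compensation, here is a toy Python simulation; the selection rule, cost model, and all numbers below are assumptions made for the example, not the paper's Bayesian-optimal mechanism.

```python
# Toy simulation; the selection probabilities, cost model, and budget are
# assumptions, not the paper's Bayesian-optimal mechanism.
import numpy as np

rng = np.random.default_rng(1)
num_clients, rounds = 10, 100
eps_per_round = 0.1                                   # DP budget spent per participation
sensitivity = rng.uniform(0.1, 1.0, num_clients)      # cost a client assigns to unit leakage

# Heuristic selection rule: clients with lower privacy sensitivity participate
# more often (a stand-in for the optimal selection probabilities).
p_select = 1.0 / (1.0 + sensitivity)
p_select /= p_select.max()

participations = np.zeros(num_clients)
for _ in range(rounds):
    participations += rng.random(num_clients) < p_select

cumulative_leakage = participations * eps_per_round   # basic composition over rounds
compensation = sensitivity * cumulative_leakage       # pay clients for realized leakage
print("participation counts:", participations.astype(int))
print("total compensation  :", round(float(compensation.sum()), 2))
```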
Speaker Ruichen Xu
Speaker biography is not available.

TrustBandit: Optimizing Client Selection for Robust Federated Learning Against Poisoning Attacks

Biniyam Deressa and Anwar Hasan (University of Waterloo, Canada)

Federated learning enables collaborative model training with privacy preservation. In this framework, individual clients train locally and send updates to a central server for aggregation, which makes such systems susceptible to poisoning attacks because the server has no visibility into local training. Existing client selection (CS) techniques enhance global accuracy but lack robustness, especially under non-independently and identically distributed (non-IID) data. To address this, our study fortifies CS with an emphasis on the resilience of federated learning. Specifically, to enhance robustness against poisoning attacks, we integrate a reputation system with adversarial multi-armed bandit (MAB) algorithms for improved model aggregation. By framing the CS problem as an adversarial MAB problem, our approach effectively estimates each client's reputation and mitigates the uncertainty in current reputation values. We establish a sublinear regret bound, a desirable property of online learning algorithms. In experiments on a publicly available dataset, our approach achieves a 94.2% success rate in identifying trustworthy clients.
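To sketch what bandit-based client selection can look like, the following Python snippet uses an EXP3-style exponential-weights update as a stand-in; the reward model, rates, and parameters are assumptions for illustration, not the TrustBandit algorithm itself.

```python
# EXP3-style sketch; rewards, rates, and the notion of "agreement with the
# aggregate" are assumptions made for illustration, not TrustBandit.
import numpy as np

rng = np.random.default_rng(2)
num_clients, rounds, per_round = 20, 200, 5
malicious = set(rng.choice(num_clients, size=4, replace=False).tolist())

gamma = 0.1                                # exploration rate
weights = np.ones(num_clients)             # reputation-like weights

for _ in range(rounds):
    probs = (1 - gamma) * weights / weights.sum() + gamma / num_clients
    chosen = rng.choice(num_clients, size=per_round, replace=False, p=probs)
    for i in chosen:
        # Stand-in reward: honest clients tend to agree with the robust
        # aggregate, poisoned clients tend not to (noisy observation).
        mean = 0.2 if i in malicious else 0.9
        reward = float(np.clip(rng.normal(mean, 0.1), 0.0, 1.0))
        # Importance-weighted exponential update (EXP3-style).
        weights[i] *= np.exp(gamma * reward / (probs[i] * num_clients))

print("clients ranked most trustworthy:", np.argsort(weights)[::-1][:per_round].tolist())
```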
Speaker Biniyam Deressa
PhD candidate at the University of Waterloo, with a focus on security and privacy-preserving federated learning. My research centers on optimizing federated learning methods and enhancing their resilience against adversarial clients.

Session FOGML-S2

FOGML 2024 – Session 2: Fast and Adaptive Distributed Machine Learning

Conference time: 10:30 AM — 11:30 AM PDT
Local time: May 20 Mon, 1:30 PM — 2:30 PM EDT
Location: Plaza A

Federated Learning-Based Cooperative Model Training for Task-Oriented Semantic Communication

Haofeng Sun (Beijing University of Posts and Telecommunications, China); Hui Tian (Beijing University of Posts and Telecommunications, China); Wanli Ni (Tsinghua University, China); Jingheng Zheng (Beijing University of Posts and Telecommunications, China)

Owing to computing heterogeneity, devices with insufficient computing resources may increase the training latency of the semantic encoder (SE) and semantic decoder (SD) models in a task-oriented communication network. In this paper, we propose a cooperative model training (CMT) framework based on federated learning for training SE and SD models across computing-heterogeneous devices. First, we design a resource allocation scheme to reduce the computing and communication latency of the proposed framework. Then, we analyze the convergence of the CMT framework when computing-weak devices are absent from training. Numerical experiments validate that the proposed CMT framework outperforms the benchmarks, obtaining a 6% gain in training accuracy and a 30% improvement in latency. Moreover, the CMT framework achieves performance nearly indistinguishable from centralized learning while protecting privacy and reducing communication overhead.
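As a minimal illustration of federated aggregation of encoder/decoder parameters, here is a toy FedAvg sketch in Python; the local update, shapes, and sample counts are assumptions, and the paper's resource allocation scheme and handling of computing-weak devices are omitted.

```python
# Toy FedAvg sketch; local training is replaced by random perturbations and
# all shapes/sample counts are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(3)

def local_update(params, lr=0.1):
    """Stand-in for local SGD on a device's semantic encoder/decoder."""
    return {k: v - lr * rng.normal(scale=0.01, size=v.shape) for k, v in params.items()}

def fed_avg(updates, weights):
    """Weighted average of parameter dictionaries (e.g., by local sample count)."""
    total = sum(weights)
    return {k: sum(w * u[k] for w, u in zip(weights, updates)) / total for k in updates[0]}

# Shared semantic encoder (SE) / decoder (SD) parameters with toy shapes.
global_params = {"encoder.w": np.zeros((8, 4)), "decoder.w": np.zeros((4, 8))}
sample_counts = [120, 80, 200]             # per-device data sizes (assumed)

for _ in range(3):                         # three communication rounds
    updates = [local_update(global_params) for _ in sample_counts]
    global_params = fed_avg(updates, sample_counts)

print("encoder norm after training:", np.linalg.norm(global_params["encoder.w"]))
```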
Speaker Haofeng Sun
Speaker biography is not available.

Cascade: Enhancing Reinforcement Learning with Curriculum Federated Learning and Interference Avoidance --- A Case Study in Adaptive Bitrate Selection

Salma Emara, Daniel Liu, Fei Wang and Baochun Li (University of Toronto, Canada)

Current reinforcement learning (RL) algorithms, particularly RL-based networking algorithms, demonstrate significant potential for overcoming the limitations of manually tuned heuristics. However, RL-based algorithms are known to be sample-inefficient and may not perform well across a wide range of environments. Federated reinforcement learning (FRL) aims to enhance sample efficiency and improve model performance across a wide variety of environments. Nevertheless, many existing approaches neglect the challenges posed by the dynamic nature of training sample distributions in RL and the heterogeneity of data across clients in FRL, which may limit the broader applicability of FRL algorithms. Addressing these gaps, we propose Cascade, the first curriculum federated reinforcement learning framework with interference avoidance, and we study Cascade in the context of RL-based adaptive bitrate (ABR) selection algorithms. To eliminate interference between tasks from different clients in FRL, we propose an interference-avoidance technique that penalizes changes to model parameters that are important for other clients. We extensively evaluate Cascade in a wide range of network environments. Our experiments show that Cascade outperforms state-of-the-art federated learning settings by at least 21% in average reward and 15% in model skewness. These findings highlight the efficacy of Cascade and underscore the potential of enhancing FRL.
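The interference-avoidance idea of penalizing changes to parameters that matter to other clients can be sketched as an importance-weighted quadratic penalty (in the spirit of EWC-style regularization); the exact form used by Cascade may differ, so the following Python snippet is an assumed illustration.

```python
# Assumed penalty form (importance-weighted quadratic drift, EWC-style); the
# exact penalty used by Cascade may differ.
import numpy as np

def penalized_loss(task_loss, params, reference_params, importance, lam=1.0):
    """task_loss: the client's own RL objective value (scalar).
    params / reference_params: flattened current vs. global parameter vectors.
    importance: per-parameter importance accumulated from other clients."""
    drift = params - reference_params
    return task_loss + lam * np.sum(importance * drift ** 2)

# Toy usage with assumed values.
rng = np.random.default_rng(4)
theta_global = rng.normal(size=100)             # model received from the server
theta_local = theta_global + 0.05 * rng.normal(size=100)
importance = rng.uniform(0.0, 1.0, size=100)    # e.g., squared-gradient estimates
print(penalized_loss(task_loss=1.3, params=theta_local,
                     reference_params=theta_global, importance=importance))
```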
Speaker Fei Wang
Speaker biography is not available.

Wireless Hierarchical Federated Aggregation Weights Design with Loss-Based-Heterogeneity

Yuchuan Ye and Youjia Chen (Fuzhou University, China); Junnan Yang (University of Technology Sydney, Australia); Ming Ding (Data 61, Australia); Peng Cheng (La Trobe University, Australia & The University of Sydney, Australia); Jinsong Hu and Haifeng Zheng (Fuzhou University, China)

Hierarchical federated learning (HFL) in wireless networks significantly saves communication resources thanks to edge aggregation at mobile edge computing (MEC) servers. Considering the spatially correlated data in wireless networks, in this paper we analyze the performance of HFL with hybrid data distributions, i.e., intra-MEC independent and identically distributed (IID) and inter-MEC non-IID data samples. We also derive the impact of data heterogeneity and global aggregation frequency on performance. Based on our theoretical results, we further propose a novel aggregation weights design with loss-based heterogeneity to accelerate HFL training and improve learning accuracy. Our simulations verify the theoretical results and demonstrate the performance gain achieved by the proposed aggregation weights design. Moreover, we find that this gain is higher in a high-heterogeneity scenario than in a low-heterogeneity one.
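As a toy illustration of loss-based aggregation weights, the following Python snippet gives MEC servers with higher reported loss a larger weight in the global aggregation; the softmax mapping, temperature, and all values are assumptions, not the weights derived from the paper's convergence analysis.

```python
# Toy loss-based weighting; the softmax mapping, temperature, and values are
# assumptions, not the weights derived in the paper.
import numpy as np

def loss_based_weights(edge_losses, temperature=1.0):
    """Map per-MEC average training losses to normalized aggregation weights."""
    scores = np.exp(np.asarray(edge_losses, dtype=float) / temperature)
    return scores / scores.sum()

def global_aggregate(edge_models, weights):
    """Weighted average of edge-aggregated model vectors."""
    return sum(w * m for w, m in zip(weights, edge_models))

# Assumed values: three MEC servers with different degrees of heterogeneity.
edge_models = [np.array([1.0, 0.0]), np.array([0.8, 0.3]), np.array([0.2, 0.9])]
edge_losses = [0.42, 0.61, 0.95]

w = loss_based_weights(edge_losses)
print("aggregation weights:", np.round(w, 3))
print("global model       :", global_aggregate(edge_models, w))
```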
Speaker Yuchuan Ye; Youjia Chen
Speaker biography is not available.


