Workshops

Session FOGML-S1

Opening Remarks

May 20 Sat, 2:00 PM — 2:10 PM EDT

Session Chair

Christopher G. Brinton (Purdue University, USA); Seyyedali Hosseinalipour (University at Buffalo–SUNY, USA); Nicolo Michelusi (Arizona State University, USA); Osvaldo Simeone (King’s College London, UK)

Session FOGML-S2

Federated Learning

May 20 Sat, 2:10 PM — 4:20 PM EDT

Lay Importance on the Layer: Federated Learning for Non-IID data with Layer-based Regularization

Eleftherios Charteros and Iordanis Koutsopoulos (Athens University of Economics and Business, Greece)

A prime challenge in machine learning model training with Federated Learning (FL) is that of non-independent and identically distributed (non-IID) data among clients. Approaches based on the popular Federated Averaging (FedAvg) algorithm struggle with non-IID data, while the Federated Proximation (FedProx) method, which introduces a regularization term into the local training loss function, lacks the convergence speed of FedAvg. In this paper, we introduce Federated Learning with Layer-Adaptive Proximation (FedLap), a new algorithm for tackling non-IID data.

The key idea is a new regularization term in the local loss function, which generalizes that of FedProx and captures the divergence between the global and local model weights of each client at the level of individual Deep Neural Network (DNN) layers. That is, the weights of different layers of the DNN are treated differently in the regularization function. Divergence between the global and local models is captured through a dissimilarity metric and a distance metric, both applied per DNN layer. Since regularization is applied per layer rather than uniformly to all weights as in FedProx, during local updates only the weights of layers that drift away from the global model are fine-tuned, while the other weights are unaffected. We verify the superior performance of FedLap over FedAvg and FedProx in terms of accuracy and convergence speed on different datasets, in settings with non-IID data and unstable client participation. FedLap achieves 3-5% higher accuracy in the first 20 communication rounds compared to FedAvg and FedProx, and up to 10% higher accuracy in cases of unstable client participation.
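
As a rough illustration of the layer-based idea (not the authors' code), the sketch below implements a layer-adaptive proximal term in PyTorch. The cosine-based dissimilarity metric, the squared-distance metric, and the coefficient mu are assumptions chosen for illustration; the paper's exact metrics may differ.

```python
import torch
import torch.nn.functional as F

def layer_proximal_term(local_model, global_model, mu=0.01):
    """Per-layer penalty between local and global weights, summed over layers."""
    penalty = torch.zeros(())
    for (_, w_loc), (_, w_glob) in zip(local_model.named_parameters(),
                                       global_model.named_parameters()):
        w_glob = w_glob.detach()
        # Dissimilarity: 1 - cosine similarity of the flattened layer weights.
        dissim = 1.0 - F.cosine_similarity(w_loc.flatten(), w_glob.flatten(), dim=0)
        # Distance: squared Euclidean norm, as in FedProx's proximal term.
        dist = (w_loc - w_glob).pow(2).sum()
        # Layers that drift further from the global model are penalized more;
        # layers that stay close contribute almost nothing.
        penalty = penalty + dissim * dist
    return mu / 2.0 * penalty

# Local objective: loss = task_loss + layer_proximal_term(model, global_model)
```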
Speaker Iordanis Koutsopoulos (Athens University of Economics and Business)

Iordanis Koutsopoulos is a Professor with the Department of Informatics of Athens University of Economics and Business, Athens, Greece.


Federated Learning at the Edge: An Interplay of Mini-batch Size and Aggregation Frequency

Weijie Liu and Xiaoxi Zhang (Sun Yat-sen University, China); Jingpu Duan (Peng Cheng Laboratory, China); Carlee Joe-Wong (Carnegie Mellon University, USA); Zhi Zhou and Xu Chen (Sun Yat-sen University, China)

Federated Learning (FL) is a distributed learning paradigm that can coordinate heterogeneous edge devices to perform model training without sharing private raw data. Prior work on the convergence analysis of FL has focused on mini-batch size and aggregation frequency separately. However, increasing the batch size and the number of local updates affect model performance and system overhead differently. This paper proposes a novel model that quantifies the interplay of FL mini-batch size and aggregation frequency in order to navigate the trade-offs among convergence, completion time, and resource cost. We obtain a new convergence bound for synchronous FL with respect to these decision variables under heterogeneous training datasets at different devices. Based on this bound, we derive closed-form solutions for co-optimized mini-batch size and aggregation frequency that are uniform across devices. We then design an efficient exact algorithm to optimize heterogeneous mini-batch configurations, further improving model accuracy. An adaptive control algorithm is also proposed to dynamically adjust the batch sizes and the number of local updates per round. Extensive experiments demonstrate the superiority of our offline optimized solutions and online adaptive algorithm.
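
For intuition only, the sketch below shows where the two co-optimized knobs enter a synchronous FL round: the mini-batch size b and the aggregation frequency, i.e., the number of local steps H between aggregations. The plain SGD loop and all names are illustrative assumptions, not the paper's algorithm.

```python
import copy
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader

def local_round(global_model, dataset, b, H, lr=0.01):
    """Run H local SGD steps with mini-batch size b; return updated weights."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loader = DataLoader(dataset, batch_size=b, shuffle=True)
    it = iter(loader)
    for _ in range(H):          # larger H: fewer aggregations, more client drift
        try:
            x, y = next(it)     # larger b: less gradient noise, more compute per step
        except StopIteration:   # recycle the loader on small datasets
            it = iter(loader)
            x, y = next(it)
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()
    return model.state_dict()   # the server averages these across devices
```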
Speaker Weijie Liu (Sun Yat-sen University)

Weijie Liu received the B.E. degree in electronics and communication engineering from Sun Yat-sen University in 2021. He is currently working toward the M.E. degree at the School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China. His research interests include federated learning and edge computing.


Anarchic Convex Federated Learning

Dongsheng Li and Xiaowen Gong (Auburn University, USA)

The rapid advances in federated learning (FL) in the past few years have inspired a great deal of research on this emerging topic. Existing work on FL often assumes that clients participate in the learning process with some particular pattern (such as balanced participation), and/or in a synchronous manner, and/or with the same number of local iterations, yet these assumptions can be hard to satisfy in practice. In this paper, we propose AFLC, an Anarchic Federated Learning algorithm for Convex learning problems, which gives maximum freedom to clients. In particular, AFLC allows clients to 1) participate in arbitrary rounds; 2) participate asynchronously; and 3) participate with arbitrary numbers of local iterations. The proposed AFLC algorithm enables clients to participate in FL efficiently and flexibly according to their needs, e.g., based on their heterogeneous and time-varying computation and communication capabilities. We characterize performance bounds on the learning loss of AFLC as a function of clients' local model delays and local iteration numbers. Our results show that the convergence error can be made arbitrarily small by choosing appropriate learning rates, and that the convergence rate matches that of existing benchmarks. The results also characterize the impact of clients' various parameters on the learning loss, providing useful insights. Numerical results demonstrate the efficiency of the proposed algorithm.
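
A minimal server-side sketch of the "anarchic" setting follows: clients push updates whenever they like, asynchronously, after arbitrary numbers of local iterations. The staleness- and iteration-aware weighting below is an illustrative assumption, not the paper's update rule.

```python
import queue
import torch

updates = queue.Queue()  # clients push (delta, local_iters, version_seen) at will

def server_loop(global_params, eta=0.1, num_updates=1000):
    version = 0
    for _ in range(num_updates):
        # Updates arrive in arbitrary rounds, asynchronously, with arbitrary
        # numbers of local iterations behind them.
        delta, local_iters, version_seen = updates.get()
        staleness = version - version_seen
        # Normalize by local iteration count and damp stale updates
        # (an illustrative weighting; the paper analyzes such delays formally).
        weight = eta / (local_iters * (1 + staleness))
        with torch.no_grad():
            for p, d in zip(global_params, delta):
                p.add_(d, alpha=weight)
        version += 1
```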
Speaker Dongsheng Li (Auburn University)

Dongsheng Li is currently a Ph.D. student in Electrical and Computer Engineering at Auburn University, USA. His main research interests include federated learning and wireless communication scheduling.


Efficient Communication-assisted Sensing based on Federated Transfer Learning

Wenjiang Ouyang, Junsheng Mu and Jingyan Wu (Beijing University of Posts and Telecommunications, China); Bohan Li (University of Southampton, UK); Xiaojun Jing (Beijing University of Posts and Telecommunications, China)

Integrated sensing and communication (ISAC) has been an attractive solution to the issues of spectrum shortage and hardware overhead through the coexistence of sensing and communication functions. In ISAC, one of the most meaningful topics is communication-assisted sensing, namely sensing-centered ISAC, where sensing performance is enhanced by improving the communication efficiency among multiple sensing nodes. As a classic architecture for distributed sensing, federated learning (FL) based communication-assisted sensing is under continuous discussion. Note that the huge number of parameters in the deep neural networks (DNNs) applied in FL places a heavy burden on the communication link between the sensing nodes and the central server, which can even cause training clients to disconnect. In this paper, a novel federated transfer learning (FTL) framework is proposed to address this issue. More specifically, a task-general sub-model of the DNN is pre-trained as a feature extractor to accelerate learning, and only the task-specific sub-model is aggregated to reduce the communication overhead. The performance of the proposed scheme is evaluated, and the simulation results demonstrate that our algorithm outperforms the benchmark in terms of communication overhead and average test accuracy.
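
As a hedged sketch of where the communication saving comes from (module names and dimensions are assumptions): the pre-trained, task-general feature extractor stays frozen on each node and is never uploaded, while only the small task-specific sub-model is sent to the server and averaged.

```python
import torch
import torch.nn as nn

class SensingNet(nn.Module):
    def __init__(self, extractor: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.extractor = extractor            # task-general, pre-trained
        for p in self.extractor.parameters():
            p.requires_grad = False           # frozen: never uploaded
        self.head = nn.Linear(feat_dim, num_classes)  # task-specific sub-model

    def forward(self, x):
        return self.head(self.extractor(x))

def aggregate_heads(head_states):
    """FedAvg over the task-specific head parameters only."""
    return {k: torch.stack([s[k] for s in head_states]).mean(dim=0)
            for k in head_states[0]}
```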
Speaker Wenjiang Ouyang (Beijing University of Posts and Telecommunications)

Wenjiang Ouyang received the master's degree in information and communication engineering from Beijing University of Posts and Telecommunications in 2022. He is currently pursuing the doctoral degree at Beijing University of Posts and Telecommunications. His research interests include ISAC, wireless communication, and artificial intelligence.


Session Chair

Christopher G. Brinton (Purdue University, USA)

Session FOGML-S3

Distributed Learning and Edge Computing

May 20 Sat, 4:20 PM — 6:00 PM EDT

Uplink Power Control for Extremely Large-Scale MIMO with Multi-Agent Reinforcement Learning and Fuzzy Logic

Ziheng Liu, Zhilong Liu and Jiayi Zhang (Beijing Jiaotong University, China); Huahua Xiao (ZTE Corporation, China); Bo Ai (Beijing Jiaotong University & State Key Lab of Rail Traffic Control and Safety, China); Derrick Wing Kwan Ng (University of New South Wales, Australia)

In this paper, we investigate the uplink transmit power optimization problem in cell-free (CF) extremely large-scale multiple-input multiple-output (XL-MIMO) systems. Instead of applying traditional methods, we propose two signal processing architectures: centralized training with centralized execution aided by fuzzy logic, and centralized training with decentralized execution aided by fuzzy logic. Both combine multi-agent reinforcement learning (MARL) with fuzzy logic to solve the power control design problem of maximizing the system spectral efficiency (SE). Furthermore, the uplink performance of the system is evaluated under maximum ratio (MR) combining and local minimum mean-squared error (L-MMSE) combining. Our results show that the proposed fuzzy-logic-aided methods outperform the conventional MARL-based method and signal processing methods in terms of computational complexity. Moreover, the SE performance under MR combining is even better than that of the conventional MARL-based method.
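
For a flavor of how fuzzy logic can complement a learned power-control policy (the paper's exact integration may differ), the sketch below fuzzifies a crisp observation into membership degrees and defuzzifies a power-scaling decision. The membership functions and rule consequents are illustrative assumptions.

```python
import numpy as np

def memberships(snr_db):
    """Triangular membership degrees for {low, medium, high} SNR."""
    low = np.clip((10 - snr_db) / 10, 0, 1)
    med = np.clip(1 - abs(snr_db - 10) / 10, 0, 1)
    high = np.clip((snr_db - 10) / 10, 0, 1)
    return np.array([low, med, high])

def fuzzy_power_scale(snr_db):
    """Scale factor applied to an agent's learned transmit-power action."""
    consequents = np.array([1.5, 1.0, 0.5])  # low SNR: raise power; high: lower it
    m = memberships(snr_db)
    return float(m @ consequents / m.sum())  # weighted-average defuzzification
```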
Speaker Ziheng Liu (Beijing Jiaotong University, China)



Information Recycling Assisted Collaborative Edge Computing for Distributed Learning

Wanlin Liang, Tianheng Li and Xiaofan He (Wuhan University, China)

The ever-increasing scale and complexity of artificial intelligence services have ignited recent research interest in distributed edge learning. For better communication rate and spectral efficiency, non-orthogonal transmissions are often adopted for distributed edge learning. On the other hand, the computation rate of distributed edge learning is sometimes hampered by a few straggling edge nodes (ENs), and existing countermeasures either introduce redundant computation or require extra data retransmission. To the best of our knowledge, developing a new edge computing scheme for distributed learning that can handle EN straggling without these extra costs remains open. Fortunately, this work finds that the computation issue can be addressed jointly with the communication issue by integrating a novel information recycling mechanism into existing non-orthogonal transmission techniques. In particular, an information recycling assisted collaborative edge computing scheme is proposed for distributed learning, which allows each EN to recycle, for free, part of the task information intended for other ENs by exploiting the successive interference cancellation (SIC) procedure in non-orthogonal transmission. In this way, faster ENs can help execute part of the workload of straggling ENs without redundant computation or data retransmission. Besides, to optimize the total throughput of the distributed edge learning system, a joint power control and rate splitting algorithm is developed. Simulations are conducted to corroborate the effectiveness of the proposed scheme.
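
A toy sketch of the recycling mechanism, relying only on the standard SIC fact that a receiver must decode, then cancel, every message ordered before its own: whatever is decoded along the way is reusable task information at no extra transmission cost. Node IDs and the decoding order are illustrative.

```python
def recycled_tasks(sic_order, my_id):
    """Task chunks EN `my_id` decodes for free as a by-product of SIC."""
    mine = sic_order.index(my_id)
    # Everything decoded before my own message is recyclable side information.
    return sic_order[:mine]

# Example: messages decoded in SIC order at the strongest EN.
order = ["EN3", "EN1", "EN2"]        # EN2 has the strongest channel
print(recycled_tasks(order, "EN2"))  # -> ['EN3', 'EN1']
```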
Speaker Wanlin Liang (Wuhan University)



Coding-Aware Rate Splitting for Distributed Coded Edge Learning

Tianheng Li, Jingzhe Zhang and Xiaofan He (Wuhan University, China)

Driven by the explosive escalation of machine learning applications, considerable efforts have been devoted to distributed edge learning. To alleviate the so-called straggling issue, coded computing, which injects elaborate redundancy into computation, emerges as a promising solution, which in turn has ignited recent research interest in distributed coded edge learning. Albeit effectively mitigating straggling, coded edge computing brings new challenges in communications. In particular, existing transmission schemes are mainly designed for conventional distributed edge learning, where the data offloaded to different edge nodes (ENs) are non-overlapping. They cannot achieve the best performance when applied directly to distributed coded edge learning, due to the redundancy among the data for different ENs in coded settings. To the best of our knowledge, a transmission scheme tailored to distributed coded edge learning remains open. With this consideration, a novel coding-aware rate splitting scheme is proposed in this work, which splits the data for different ENs in a coding-aware way to avoid transmission redundancy and enables multiple simultaneous multicasts to the ENs. To minimize the overall processing latency, an iterative optimization algorithm is developed based on the concave-convex procedure (CCCP) framework. Simulations demonstrate that the proposed scheme can substantially reduce the overall latency of distributed coded edge computing compared to the baselines.
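
As a simplified sketch of the coding-aware splitting idea (the data structures are assumptions): because coded redundancy makes different ENs demand overlapping data chunks, chunks wanted by the same set of ENs can be grouped into one multicast stream instead of being unicast repeatedly.

```python
from collections import defaultdict

def split_into_multicasts(demand):
    """Group chunks by the exact set of ENs that need them."""
    groups = defaultdict(list)
    for chunk, ens in demand.items():
        groups[frozenset(ens)].append(chunk)
    return groups

# Toy demand induced by coded redundancy: c1 and c3 are needed by both ENs.
demand = {"c1": {"EN1", "EN2"}, "c2": {"EN1"}, "c3": {"EN1", "EN2"}}
for receivers, chunks in split_into_multicasts(demand).items():
    print(sorted(receivers), "<-", chunks)  # one multicast per receiver set
```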
Speaker Tianheng Li (Wuhan University)



Pragmatic Communication: Bridging Neural Networks for Distributed Agents

Tianhao Guo (Shanxi University, China & Xidian University, China)

In this paper, an intelligence-to-intelligence communication design with a language generation scheme is studied. The concepts and features of pragmatics and pragmatic communication are first discussed and defined from a linguistic point of view: intelligence-to-intelligence communication in a certain environment, using task performance as the evaluation criterion, with the goal and the construction of the environment as inputs and task completion as the output. Then, we propose the "glue neural layer" (GNL) design, which bridges two intelligences to form a deeper neural network for effective and efficient communication training. Based on the design of the GNL, we reflect on the relationship between the structure of languages and that of neural networks. Furthermore, a neuromorphic framework of pragmatic communication is proposed as a basis for further discussion. Experiments show that the GNL design can dramatically change performance. Finally, the advantages of pragmatic communication and several open research problems are discussed.
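
A minimal PyTorch sketch of the bridging idea, assuming the GNL is a single trainable linear map between two frozen, pre-trained networks; the paper's actual GNL design may differ.

```python
import torch.nn as nn

class GluedAgents(nn.Module):
    def __init__(self, speaker: nn.Module, listener: nn.Module,
                 speaker_out: int, listener_in: int):
        super().__init__()
        self.speaker, self.listener = speaker, listener
        self.gnl = nn.Linear(speaker_out, listener_in)  # the trainable bridge
        for agent in (self.speaker, self.listener):
            for p in agent.parameters():
                p.requires_grad = False    # pre-trained agents stay fixed

    def forward(self, x):
        msg = self.speaker(x)                # speaker's internal representation
        return self.listener(self.gnl(msg))  # listener acts on the bridged message
```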
Speaker Tianhao Guo (Shanxi University & Xidian University)

Tianhao Guo (Member, IEEE) received the Ph.D. degree from the School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK, in 2020. He is currently a lecturer at Shanxi University and a postdoctoral research associate with the National Key Laboratory of Integrated Services Networks (ISN), Xidian University, China. His research interests include semantic and pragmatic communication, reconfigurable intelligent surface-aided joint communication and sensing in coal mines, and deep learning technologies for wireless communications. He serves as a review editor for Frontiers in Space Technologies and Frontiers in Computer Science, served as workshop co-chair of the ICSINC Big Data Workshop 2021, and has reviewed papers for IEEE Wireless Communications and Wireless Communications and Mobile Computing.


Session Chair

Seyyedali Hosseinalipour (University at Buffalo–SUNY, USA)

