Workshops

Session PerAI-6G-Opening

Opening Session

May 20 Sat, 8:00 AM — 8:10 AM EDT

Opening Session

Wen Wu (Peng Cheng Laboratory, P.R. China)

TBD
Speaker Wen Wu (Peng Cheng Laboratory)

Dr. Wen Wu received the B.E. degree in Information Engineering from South China University of Technology, Guangzhou, China, in 2012, the M.E. degree in Electrical Engineering from the University of Science and Technology of China, Hefei, China, in 2015, and the Ph.D. degree in Electrical and Computer Engineering from the University of Waterloo, Waterloo, ON, Canada, in 2019. He then worked as a postdoctoral fellow with the Department of Electrical and Computer Engineering, University of Waterloo. He is currently an Associate Researcher at the Peng Cheng Laboratory, Shenzhen, China. His research interests include 6G networks, pervasive network intelligence, digital twins, and network virtualization.


Session Chair

Wen Wu (Peng Cheng Laboratory, P.R. China)

Session PerAI-6G-Keynote-1

Keynote Session 1

May 20 Sat, 8:10 AM — 9:00 AM EDT

The Future Networks where Communications Meet Artificial Intelligence

Shui Yu (University of Technology Sydney, Australia)

TBD
Speaker Shui Yu (University of Technology Sydney, Australia)

Shui Yu is a Professor in the School of Computer Science, Faculty of Engineering and Information Technology, University of Technology Sydney (UTS), and Deputy Chair of the UTS Research Committee. His research covers cybersecurity, privacy, the networking and communication aspects of Big Data, and applied mathematics for computer science. He served as a Distinguished Lecturer of the IEEE Communications Society (2018-2021). He is a Distinguished Visitor of the IEEE Computer Society (2022-2024), a voting member of the IEEE ComSoc Educational Services Board, and an elected member of the Boards of Governors of both the IEEE Communications Society and the IEEE Vehicular Technology Society. He is a Fellow of the IEEE.



Session Chair

Ruozhou Yu (North Carolina State University, United States)

Session PerAI-6G-Keynote-2

Keynote Session 2

May 20 Sat, 9:00 AM — 9:50 AM EDT

Consensus Protocols for IoT Systems with Blockchains

Jelena Misic (Toronto Metropolitan University, Canada)

TBD
Speaker
Speaker biography is not available.

Session Chair

Ning Zhang (University of Windsor, Canada)

Session PerAI-6G-Session-1

Session 1: AI applications in 6G

May 20 Sat, 9:50 AM — 11:20 AM EDT

EMP-GAN: Encoder-Decoder Generative Adversarial Network for Mobility Prediction

Sammy Yap Xiang Bang (Sungkyunkwan University, Korea (South)); Syed Muhammad Raza (Sungkyunkwan University, Korea (South)); Huigyu Yang (Sungkyunkwan University, Korea (South)); Hyunseung Choo (Sungkyunkwan University, Korea (South))

Ultra-dense cell deployments in Beyond 5G and 6G result in extensive overlap between cells. This makes the current reactive handover mechanism inadequate, since multiple strong signals are available at any given position. Moreover, recently proposed predictive mobility management schemes are also unsuitable, as they may lead to unnecessary handovers. A predictive path-based mobility management scheme can solve these issues, but forecasting the User Equipment (UE) path with high accuracy is challenging. This paper proposes the Encoder-Decoder Generative Adversarial Network for Mobility Prediction (EMP-GAN) to forecast the UE path multiple steps ahead. The EMP-GAN architecture consists of generator and discriminator neural networks, where the generator predicts mobility (the next multi-step target sequence) and the discriminator distinguishes the predicted target sequence from the ground truth during adversarial learning. Besides adversarial learning, feature matching and fact forcing training methods are employed for fast GAN convergence and improved performance. EMP-GAN is evaluated on a mobility dataset collected from the wireless network of the Pangyo ICT Research Center, Korea, and the results show that it outperforms state-of-the-art prediction models. In particular, EMP-GAN achieves accuracies of 95.55%, 94.70%, 93.50%, and 92.39% for 3-, 5-, 7-, and 9-step predictions, respectively.
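As an illustrative aside (not the authors' code), the k-step accuracy reported in the abstract can be computed along the following lines; the function name, the strict all-steps-match criterion, and the toy cell IDs are assumptions:

```python
def multi_step_accuracy(pred_paths, true_paths, k):
    """Fraction of trajectories whose first k predicted cells
    all match the ground-truth cells (a strict k-step accuracy)."""
    assert len(pred_paths) == len(true_paths)
    hits = sum(
        1 for pred, true in zip(pred_paths, true_paths)
        if pred[:k] == true[:k]
    )
    return hits / len(pred_paths)

# Toy example: two UE trajectories over cell IDs.
preds = [[3, 5, 7, 7], [1, 2, 2, 4]]
truth = [[3, 5, 7, 8], [1, 2, 3, 4]]
print(multi_step_accuracy(preds, truth, 2))  # both match on the first 2 steps -> 1.0
print(multi_step_accuracy(preds, truth, 3))  # only the first trajectory matches -> 0.5
```

A per-step (rather than all-steps) accuracy would score each prediction horizon independently; the paper does not specify which variant it uses.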
Speaker Sammy Yap Xiang Bang (Sungkyunkwan University)

A graduate student working in the AI networking area.


Content-Aware Semantic Communication for Goal-Oriented Wireless Communications

Yuzhou Fu (Xidian University, China); Wenchi Cheng (Xidian University, China); Wei Zhang (The University of New South Wales, Australia)

Semantic communication is a promising candidate for further realizing the potential of goal-oriented wireless communications. While semantic communication has shown promise for intelligent tasks, its application to multitasking services remains limited. In this paper, we propose the Content-Aware Semantic Communication (CA-SC) framework to reconstruct the intended data in support of multitasking. In particular, CA-SC uses an attention map to allocate more bit resources to the semantic information that is semantically correlated with the current task, thus achieving efficient goal-oriented wireless communications. To further improve the coding rate while reconstructing the intended data, we formulate a rate-distortion optimization problem as the loss function of the CA-SC framework, which jointly optimizes the semantic codec and the channel codec. Numerical results show that the CA-SC scheme outperforms an existing semantic codec scheme and the traditional codec scheme on image reconstruction and visual identification tasks.
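The attention-guided bit allocation idea can be sketched roughly as follows. This is a simplification under assumed names (the paper's codec is learned end-to-end; proportional largest-remainder rounding is our illustrative stand-in):

```python
def allocate_bits(attention, total_bits):
    """Distribute a bit budget across semantic features in proportion
    to their attention scores, using largest-remainder rounding."""
    s = sum(attention)
    shares = [a / s * total_bits for a in attention]
    bits = [int(x) for x in shares]          # floor of each share
    remainder = total_bits - sum(bits)
    # hand leftover bits to the features with the largest fractional parts
    order = sorted(range(len(shares)), key=lambda i: shares[i] - bits[i], reverse=True)
    for i in order[:remainder]:
        bits[i] += 1
    return bits

print(allocate_bits([0.5, 0.3, 0.2], 10))  # -> [5, 3, 2]
```

Features with high attention (strong semantic correlation with the current task) receive proportionally more of the bit budget, which is the intuition behind the content-aware allocation described above.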
Speaker Yuzhou Fu (Xidian University)



Deep Reinforcement Learning-Assisted Age-optimal Transmission Policy for HARQ-aided NOMA Networks

Kun Peng Liu (Harbin Institute of Technology (Shenzhen), China); Aimin Li (Harbin Institute of Technology (Shenzhen), China); Shaohua Wu (Harbin Institute of Technology, China)

The recent interweaving of AI and 6G technologies has sparked extensive research interest in further enhancing reliable and timely communications. Age of Information (AoI), a novel and integrated metric capturing the intricate trade-offs among reliability, latency, and update frequency, has been well researched since its conception. This paper contributes new results in this area by employing a Deep Reinforcement Learning (DRL) approach to intelligently decide how to allocate power resources and when to retransmit in a freshness-sensitive downlink multi-user Hybrid Automatic Repeat reQuest with Chase Combining (HARQ-CC) aided Non-Orthogonal Multiple Access (NOMA) network. Specifically, the AoI minimization problem is formulated as a Markov Decision Process (MDP). Then, to achieve deterministic, age-optimal, and intelligent power allocations and retransmission decisions, a Double-Dueling Deep Q-Network (DQN) is adopted. Furthermore, a more flexible retransmission scheme, referred to as the Retransmit-At-Will scheme, is proposed to further improve the timeliness of the HARQ-aided NOMA network. Simulation results verify the superiority of the proposed intelligent scheme and demonstrate the threshold structure of the retransmission policy. Extensive simulation results also address whether user pairing is necessary.
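The two standard DQN refinements named in the abstract can be stated in a few lines of plain Python. All names and numbers below are toy illustrations, not the paper's implementation:

```python
def dueling_q(value, advantages):
    """Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

def double_dqn_target(reward, gamma, q_online_next, q_target_next):
    """Double DQN: the online network picks the next action,
    the target network evaluates it (reduces overestimation bias)."""
    best = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    return reward + gamma * q_target_next[best]

print(dueling_q(1.0, [0.0, 2.0, 4.0]))  # -> [-1.0, 1.0, 3.0]
print(double_dqn_target(1.0, 0.9, [1.0, 3.0, 2.0], [0.5, 0.7, 0.9]))  # approximately 1.63
```

Combining the two gives the "Double-Dueling" agent: the dueling head shapes the Q-values, while the double-Q target stabilizes the bootstrap update.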
Speaker Kunpeng Liu (Harbin Institute of Technology (Shenzhen), China)



Enhance Detection of SSVEPs through a Sinusoidal-Referenced Task-Related Component Analysis Method

Zhenyu Wang (Shanghai Advanced Research Institute, Chinese Academy of Sciences, China); Tianheng Xu (Shanghai Advanced Research Institute, Chinese Academy of Sciences, China); Xianfu Chen (VTT Technical Research Centre of Finland, Finland); Ting Zhou (Shanghai University, China); Honglin Hu (Shanghai Advanced Research Institute, China); Celimuge Wu (The University of Electro-Communications, Japan)

Brain-computer interface (BCI) technology is deemed pivotal for future wireless communication systems, e.g., 6G, owing to its capability to connect a brain and a machine. The device, the paradigm, and the algorithm are the three most important aspects of a practical BCI. Among them, the detection algorithm has a decisive impact on the efficiency and robustness of the system. Artificial intelligence (AI) has also revealed great potential for decoding brain signals. In this paper, we propose a new detection algorithm for the steady-state visual evoked potential (SSVEP) based BCI, a typical noninvasive BCI paradigm that achieves by far the highest information transfer rate (ITR) among noninvasive systems. The new algorithm, termed sinusoidal-referenced task-related component analysis (srTRCA), resembles conventional algorithms such as TRCA in that it is also based on spatial filtering and template matching. However, compared with conventional algorithms, srTRCA makes better use of the prior knowledge that the SSVEP signal is sinusoidal. By adding to its objective function a term that characterizes the correlation between the task-related component and a sinusoidal reference, srTRCA is expected to achieve enhanced detection performance, especially when training data are insufficient. The performance of srTRCA is tested on a benchmark SSVEP dataset of 35 subjects against three baselines: canonical correlation analysis (CCA), TRCA, and similarity-constrained TRCA (scTRCA). Results show that srTRCA achieves a noticeable performance enhancement over all three baselines, which proves the validity of the proposed algorithm.
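To illustrate the role of the sinusoidal reference, here is a toy single-channel frequency detector. This is only a CCA-flavored sketch under assumed names; srTRCA itself additionally learns spatial filters over task-related components from training data:

```python
import math

def sinusoidal_reference(freq, fs, n, n_harmonics=2):
    """Sine/cosine reference set for one stimulation frequency."""
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append([math.sin(2 * math.pi * h * freq * i / fs) for i in range(n)])
        refs.append([math.cos(2 * math.pi * h * freq * i / fs) for i in range(n)])
    return refs

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def classify_ssvep(signal, candidate_freqs, fs):
    """Pick the stimulation frequency whose reference correlates best."""
    def score(f):
        return max(abs(pearson(signal, r))
                   for r in sinusoidal_reference(f, fs, len(signal)))
    return max(candidate_freqs, key=score)

fs, n = 250, 250  # 1 second of EEG at 250 Hz (synthetic)
signal = [math.sin(2 * math.pi * 10 * i / fs) for i in range(n)]
print(classify_ssvep(signal, [8.0, 10.0, 12.0], fs))  # -> 10.0
```

The prior knowledge exploited here (the SSVEP is sinusoidal at the stimulation frequency and its harmonics) is exactly what srTRCA injects into the TRCA objective.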
Speaker Zhenyu Wang

Zhenyu Wang is an assistant professor with the Shanghai Advanced Research Institute. His research interests include brain-computer interfaces, biomedical signal processing, and machine learning.


Session Chair

Dong Yang (Beijing Jiaotong University, P.R. China)

Session PerAI-6G-Session-2

Session 2: Distributed Learning in 6G

May 20 Sat, 2:00 PM — 3:30 PM EDT

Wireless Time-triggered Federated Learning with Adaptive Local Training Optimization

Xiaokang Zhou (Harbin Institute of Technology, China); Yansha Deng (King's College London, United Kingdom (Great Britain)); Huiyun Xia (Harbin Institute of Technology, China); Shaochuan Wu (Harbin Institute of Technology, China)

Traditional synchronous and asynchronous federated learning suffer from limitations such as stragglers, high communication overhead, and model staleness. As a generalization of the two, time-triggered federated learning (TT-Fed) offers a new perspective for alleviating these problems and provides a better trade-off between communication and training efficiency. In this paper, we consider the dynamic aggregation frequency optimization problem for TT-Fed under constrained system resources. We analyze how the number of local training epochs affects the performance of TT-Fed and quantify the impacts of distributed data heterogeneity and model staleness heterogeneity on its convergence upper bound. Based on the derived upper bound, we propose an adaptive local training optimization algorithm to minimize the system loss under a constrained resource budget. Numerical simulations show that, compared to TT-Fed with a fixed number of local training epochs, our proposed adaptive optimization algorithm provides near-optimal results under different degrees of non-IID data distribution.
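A schematic of the time-triggered aggregation idea: at each server tick, whatever client updates have arrived are merged, with staler updates down-weighted. The 1/(1+staleness) weighting and scalar "models" are illustrative assumptions, not the paper's rule (TT-Fed's actual weighting follows its convergence analysis):

```python
def tt_aggregate(global_model, updates):
    """Time-triggered merge of the client updates that arrived in one window.
    `updates` is a list of (model, staleness) pairs; models are floats
    here for brevity (a real model would be a parameter vector)."""
    if not updates:
        return global_model  # nothing arrived this tick
    weights = [1.0 / (1.0 + s) for _, s in updates]
    total = sum(weights)
    return sum(w * m for (m, _), w in zip(updates, weights)) / total

# Three clients arrive in the same window; the stalest counts least.
print(tt_aggregate(0.0, [(1.0, 0), (2.0, 1), (4.0, 3)]))
```

Because fast and slow clients are merged at fixed wall-clock ticks rather than per round, stragglers delay nothing, which is the trade-off the abstract highlights.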
Speaker Xiaokang Zhou (Harbin Institute of Technology)



A Fully Distributed Training for Class Incremental Learning in Multihead Networks

Mingjun Dai (Shenzhen University, China); Yonghao Kong (Shenzhen University, China); Junpei Zhong (The Hong Kong Polytechnic University, Hong Kong); Shengli Zhang (Shenzhen University, China); Hui Wang (Shenzhen Institute of Information Technology, China)

Due to their good elastic scalability, multi-head networks are favored in incremental learning (IL). During the IL process, the model size of a multi-head network grows continually with the increasing number of branches, which makes it difficult to store and train within a single node. To this end, a distributed training architecture, together with its prerequisite, is proposed within a model-parallelism framework. Assuming the prerequisite is satisfied, a distributed training algorithm is proposed. In addition, to avoid the dilemma that the prevalent cross-entropy (CE) loss function does not fit the distributed setting, a fully distributed cross-entropy (D-CE) loss function is proposed, which avoids information exchange among nodes. A corresponding training procedure based on D-CE (D-CE-Train) is also proposed. This method avoids the model-size expansion problem of centralized training, employs a distributed implementation to speed up training, and reduces the inter-node interaction that may significantly slow down training. A series of experiments verifies the effectiveness of the proposed method.
Speaker Yonghao Kong (Shenzhen University, China)



A Novel Hierarchically Decentralized Federated Learning Framework in 6G Wireless Networks

Jie Zhang (University of Science and Technology of China, China); Li Chen (University of Science and Technology of China, China); Xiaohui Chen (University of Science and Technology of China, China); Guo Wei (University of Science and Technology of China, China)

Decentralized federated learning (DFL) architectures enable clients to collaboratively train a shared machine learning model without a central parameter server. However, DFL is difficult to apply in multicell scenarios. In this paper, we propose an integrated hierarchically decentralized federated learning (HDFL) framework, in which devices from different cells collaboratively train a global model through periodic intra-cell D2D consensus and inter-cell aggregation. We establish strong convergence guarantees for the proposed HDFL algorithm without assuming convex objectives. The convergence rate of HDFL can be optimized to balance model accuracy and communication overhead. To improve the wireless performance of HDFL, we formulate an optimization problem that minimizes training latency and energy overhead. Numerical results on the CIFAR-10 dataset validate the superiority of HDFL over traditional DFL methods in the multicell scenario.
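The two-level aggregation pattern can be sketched with scalar "models" (an idealization: intra-cell D2D consensus is shown as exact averaging, whereas in practice it is a few gossip rounds; names and numbers are ours, not the paper's):

```python
def cell_consensus(models):
    """Intra-cell D2D consensus, idealized as full averaging within the cell."""
    return sum(models) / len(models)

def hdfl_round(cells):
    """One HDFL round: intra-cell consensus first, then inter-cell
    aggregation weighted by cell size. Models are floats for brevity."""
    cell_models = [cell_consensus(c) for c in cells]
    sizes = [len(c) for c in cells]
    return sum(m * s for m, s in zip(cell_models, sizes)) / sum(sizes)

cells = [[1.0, 3.0], [2.0, 4.0, 6.0]]   # two cells with 2 and 3 devices
print(hdfl_round(cells))                 # -> 3.2
```

Running consensus many times per inter-cell exchange saves backhaul traffic at the cost of slower global mixing; tuning that ratio is the accuracy/overhead balance the abstract mentions.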
Speaker Jie Zhang (University of Science and Technology of China)



Multi-Agent Distributed Cooperative Routing for Maritime Emergency Communication

Tingting Yang (Dalian Maritime University, China); Yujia Huo (Dalian Maritime University, China); Chengzhuo Han (Southeast University, China); Xin Sun (Dalian Maritime University, China)

With the development of the intelligent shipping industry, massive numbers of terminal devices are connected to the maritime network, which causes centralized scheduling mechanisms to fail to meet the communication requirements of large-scale networks. Meanwhile, the constantly changing locations of vessels make it difficult to obtain optimal route planning with existing routing schemes. Therefore, inspired by the success of multi-agent reinforcement learning (MARL), we propose a multi-agent distributed cooperative routing algorithm driven by the maritime emergency communication task. The algorithm utilizes the computation results of adjacent agents to train the local model, which alleviates the coupling between individual agents and global data. For large networks with small-scale topological changes, we leverage an online learning mechanism for local training to ensure the accuracy of routing decisions. The simulation results demonstrate that the proposed routing algorithm not only avoids congestion but also substantially reduces the retraining time, communication overhead, and computation cost brought about by small-scale topological changes. The corresponding open-source repository is shared on GitHub.
Speaker Yujia Huo (Dalian Maritime University)



Session Chair

Yu Cheng (Illinois Institute of Technology, United States)

Session PerAI-6G-Session-3

Session 3: Digital Twin in 6G

May 20 Sat, 3:30 PM — 4:30 PM EDT

Digital Twin Enabled Intelligent Network Orchestration for 6G: A Dual-Layered Approach

Pengyi Jia (Western University, Canada); Xianbin Wang (Western University, Canada); Sherman Shen (University of Waterloo, Canada)

Meeting diverse service requirements concurrently in future sixth-generation (6G) networks is becoming extremely challenging due to dramatically increased network and service complexity. Overcoming this challenge relies on an accurate and timely understanding of network dynamics and distributed service requirements. However, the excessive processing delay and resource consumption associated with network-wide information gathering and centralized processing inevitably deteriorate network management performance. To overcome this, a dual-layered digital twin paradigm is proposed in this paper to enable intelligent network orchestration for fast real-time optimization of 6G networks. Rapid identification of problematic network situations and the corresponding fast decision-making to overcome the related issues are achieved successively by the two layers of the proposed digital twin. The dual-layered digital twin is further applied to traffic engineering in 6G networks to maximize quality-of-service provisioning. Simulation results demonstrate that the dual-layered digital twin paradigm intelligently achieves accurate, situation-aware digital twin construction and network optimization with enhanced efficiency.
Speaker Pengyi Jia (Western University)

Pengyi Jia received his Ph.D. degree from the Department of Electrical and Computer Engineering, Western University, London, ON, Canada, in 2021. He is currently a Postdoctoral Associate at Western University.

His research interests include intelligent network synchronization, digital twins, and machine learning, as well as their applications in vertical IoT systems, cellular networks, and advanced manufacturing. One focus of his recent research is developing accurate and efficient digital twin paradigms by investigating the intrinsic temporal correlations in massive sampling data to support network optimization and application-oriented service provisioning.


Multi-Agent Deep Reinforcement Learning for Digital Twin over 6G Wireless Communication in the Metaverse

Wenhan Yu (Nanyang Technological University, Singapore); Terence Jie Chua (Nanyang Technological University, Singapore); Jun Zhao (Nanyang Technological University, Singapore)

Advances in wireless communications and high-performance Extended Reality (XR) have empowered the development of the Metaverse. The demand for Metaverse applications, and hence for real-time digital twinning of real-world scenes, is increasing. Nevertheless, replicating 2D physical-world images as 3D virtual-world scenes is computationally intensive and requires computation offloading. The disparity in transmitted scene dimension (2D as opposed to 3D) leads to asymmetric data sizes in the uplink (UL) and downlink (DL). We therefore consider an asynchronous joint UL-DL scenario in which the smaller physical-world scenes captured in the UL are uploaded to the Metaverse Console (MC) to be constructed and rendered, and the larger 3D virtual-world scenes are transmitted back in the DL. Decisions pertaining to computation offloading and channel assignment are optimized in the UL stage, and the MC optimizes power allocation for users assigned a channel in the UL transmission stage. To ensure the reliability and low latency of the system, we design a novel advantage actor-critic structure, namely the Asynchronous Actors Hybrid Critic (AAHC). Simulation experiments demonstrate that, compared to the proposed baselines, AAHC obtains better solutions with preferable training time.
Speaker Wenhan Yu (Nanyang Technological University)

Wenhan Yu received his B.S. degree from Sichuan University, Sichuan, China, in 2021. He is currently pursuing the Ph.D. degree in the School of Computer Science and Engineering, Nanyang Technological University (NTU), Singapore, supervised by Dr. Jun Zhao. His research interests cover wireless communications, deep reinforcement learning, optimization, and the Metaverse.


Machine Learning Assisted Capacity Optimization for B5G/6G Integrated Access and Backhaul Networks

Oluwaseun Ajayi (Illinois Institute of Technology, USA); Shuai Zhang (Illinois Institute of Technology, USA); Yu Cheng (Illinois Institute of Technology, USA)

The cross-layer design of traffic routing and wireless backhaul link scheduling in beyond-5G (B5G)/6G integrated access and backhaul (IAB) networks has continued to draw attention, owing to the stringent requirements of ultra-reliable low-latency communication (URLLC). Efforts to split the total bandwidth or allocate a static partition to some backhaul links have not improved the sum rate, and approximation algorithms for resource scheduling in the shared spectrum pose significant computation overhead. In this paper, we propose a two-stage machine learning (ML) framework for capacity optimization, in which the scheduling structure of past optimization instances in a simulated IAB multi-hop network is explored and exploited to accelerate the solution of a new optimization instance with linear programming. We evaluate the ML method on different multi-commodity flow (MCF) deployments in the uplink and downlink, achieving up to 94% average throughput in comparison with the delayed column generation (DCG) benchmark algorithm. Furthermore, our ML method reduces the computation time by at least 95%.
Speaker Oluwaseun Ajayi (Illinois Institute of Technology)

Oluwaseun T. Ajayi received the B.Sc. degree in Telecommunication Science from the University of Ilorin, Nigeria, in 2018. He is currently pursuing his Ph.D. degree at Illinois Institute of Technology. His research interests include machine learning in wireless networks, information freshness optimization, and vehicular communications.


Session Chair

Jie Gao (Carleton University, Canada)

Session PerAI-6G-Session-4

Session 4: Resource Management in 6G

May 20 Sat, 4:30 PM — 5:50 PM EDT

Towards AI-driven On-demand Routing in 6G Wide-Area Networks

Bin Dai (Huazhong University of Science and Technology, China); WenRui Huang (China); XinBin Shi (Huazhong University of Science and Technology, China); MengDa Lv (Huazhong University of Science and Technology, China); Yijun Mo (Huazhong University of Science and Technology, China)

In upcoming sixth-generation (6G) networks, supporting a plethora of innovative services across wide-area networks is a critical challenge. To realize dedicated QoS provisioning and meet the diverse quality-of-service (QoS) requirements of services in terms of criteria such as bandwidth, delay, jitter, and loss ratio, we propose an AI-driven on-demand routing framework that supports diverse end-to-end QoS provisioning in large-scale wide-area networks. Specifically, we make further efforts to solve the instability and non-convergence issues of the AI-driven routing algorithm and enhance it with expert knowledge of traffic engineering and the latest advances in reinforcement learning. Furthermore, simulation results on traffic data sets from real-world wide-area networks show that our algorithm outperforms other benchmark routing algorithms, with efficient learning and a significant reduction in delay, jitter, and loss ratio.
Speaker Wenrui Huang (HUST)



Learning-Aided Multi-UAV Online Trajectory Coordination and Resource Allocation for Mobile WSNs

Lu Chen (Shenzhen University, China); Suzhi Bi (Shenzhen University, China); Xiaohui Lin (Shenzhen University, China); Zheyuan Yang (The Chinese University of Hong Kong, Hong Kong); Yuan Wu (University of Macau, Macao); Qiang Ye (Memorial University of Newfoundland, Canada)

In this paper, we consider a multi-UAV-enabled wireless sensor network (WSN) in which multiple unmanned aerial vehicles (UAVs) gather data from multiple randomly moving sensor nodes (SNs). We aim to minimize the long-term average energy consumption of all SNs while satisfying their average data rate requirements and the energy constraints of the UAVs. We solve the problem by jointly optimizing the UAVs' trajectories, communication scheduling, and SN association decisions. In particular, we formulate it as a multi-stage stochastic mixed-integer non-linear programming (MINLP) problem and design an online algorithm that integrates Lyapunov optimization and deep reinforcement learning (DRL). Specifically, we first decouple the original multi-stage stochastic MINLP problem into a series of per-slot deterministic MINLP subproblems by applying Lyapunov optimization. For each per-slot problem, we use model-free DRL to obtain the optimal integer UAV-SN associations and a model-based method to optimize the UAVs' trajectories and resource allocation. Simulation results reveal that, although the communication environment changes stochastically and rapidly, our proposed online algorithm produces real-time solutions that achieve high system performance and satisfy all the constraints.
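The Lyapunov decoupling step has a generic shape worth showing: a virtual queue tracks constraint violation, and each slot greedily minimizes a drift-plus-penalty expression. Function names, the (energy, rate) action encoding, and all numbers are illustrative assumptions (the paper's per-slot subproblem additionally involves the DRL association step):

```python
def virtual_queue_update(q, required, achieved):
    """Virtual queue tracking the backlog of an average-rate constraint."""
    return max(q + required - achieved, 0.0)

def drift_plus_penalty_action(q, actions, required_rate, V):
    """Per-slot rule: minimize V*energy + q*(rate shortfall).
    A large backlog q pushes the choice toward higher-rate, costlier actions."""
    return min(actions, key=lambda a: V * a[0] + q * (required_rate - a[1]))

acts = [(1.0, 2.0), (3.0, 5.0)]  # (energy, achievable rate) options this slot
print(drift_plus_penalty_action(0.0, acts, 4.0, V=1.0))  # empty queue: save energy -> (1.0, 2.0)
print(drift_plus_penalty_action(5.0, acts, 4.0, V=1.0))  # large backlog: serve the queue -> (3.0, 5.0)
```

The parameter V trades long-term average energy against how tightly the rate constraints are tracked, which is exactly the knob Lyapunov optimization exposes.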
Speaker Lu Chen (Shenzhen University)



Joint Power Control and Bandwidth Allocation for UAV-Assisted Integrated Communication and Localization Networks

Xiaoman Li (Beijing University of Posts and Telecommunications, Beijing, China); Li Wang (Beijing University of Posts and Telecommunications, China); Liang Li (Beijing University of Posts and Telecommunications, Beijing, China); Lianming Xu (Beijing University of Posts and Telecommunications, China); Aiguo Fei (Beijing University of Posts and Telecommunications, China)

Emergency scenarios, e.g., earthquakes or forest fires, typically call for reliable communication and accurate localization for efficient rescue, and UAVs, regarded as flying base stations and wireless localization anchors, are suitable to provide both services. One critical challenge is to guarantee the performance of both communication and localization with limited radio resources. In this paper, we orchestrate the resource provisioning of UAV-assisted communication and localization networks via flexible power control and bandwidth allocation. To this end, we formulate an integrated communication-localization resource allocation problem to maximize communication capacity and localization accuracy, subject to limited power and bandwidth constraints. To solve this mixed-integer non-convex optimization problem, we propose a joint power control and bandwidth allocation algorithm that integrates particle swarm optimization (PSO) and a greedy algorithm to optimize transmit power and bandwidth, respectively. Simulation results show that our approach achieves a 23% increase in data transmission rate and a 65% decrease in localization error over the average allocation scheme.
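For readers unfamiliar with PSO, here is a minimal 1-D swarm of the kind a power-control stage might build on. Everything here (objective, bounds, coefficients, particle count) is an illustrative assumption, not the paper's configuration:

```python
import random

def pso_minimize(f, lo, hi, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimization over a 1-D box [lo, hi]."""
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                        # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]   # swarm-wide best
    w, c1, c2 = 0.7, 1.5, 1.5             # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = w * vel[i] + c1 * r1 * (pbest[i] - pos[i]) + c2 * r2 * (gbest - pos[i])
            pos[i] = min(max(pos[i] + vel[i], lo), hi)   # keep inside the box
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i], v
    return gbest

# Toy "power control": best transmit power under a quadratic penalty.
print(pso_minimize(lambda p: (p - 3.0) ** 2, 0.0, 10.0))  # close to 3.0
```

PSO handles the continuous, non-convex power variables, while a greedy pass (as in the paper) can handle the discrete bandwidth assignment the swarm cannot.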
Speaker Xiaoman Li (Beijing University of Posts and Telecommunications, Beijing, China)



Combined Bulk and Per-subcarrier Relay Selection Enabled by Cascaded Neural Computing

Wenwu Li (Guangxi University, China); Shuping Dang (University of Bristol, United Kingdom (Great Britain)); Zhenrong Zhang (Guangxi University, China); Zhihui Ge (Guangxi University, China)

Cooperative relaying has been recognized as a key technology for sixth-generation (6G) networks: it enhances the flexibility of network deployment and improves coverage. Multicarrier relay selection is required to efficiently allocate spatial resources in multi-carrier cooperative networks. To effectively reduce end-to-end latency in such networks, two basic multi-carrier relay selection strategies have been enabled by machine learning: bulk relay selection and per-subcarrier relay selection. However, machine-learning-enabled bulk relay selection may yield a low coding gain, while machine-learning-enabled per-subcarrier relay selection may require a complex artificial neural network (ANN) structure when future wireless networks contain many relays. To balance complexity and performance, we design a novel cascaded ANN (CANN) that performs a combined bulk and per-subcarrier relay selection scheme. After training the CANN, we demonstrate through simulations that the proposed method effectively achieves a compromise between complexity and performance.
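The two baseline strategies the CANN combines can be stated in a few lines; the max-min criterion used for the bulk case and the gains matrix layout are our assumptions for illustration:

```python
def bulk_select(gains):
    """Bulk selection: one relay serves all subcarriers; pick the relay
    whose worst subcarrier gain is largest (a max-min rule)."""
    return max(range(len(gains)), key=lambda r: min(gains[r]))

def per_subcarrier_select(gains):
    """Per-subcarrier selection: each subcarrier independently picks
    its best relay (higher gain, but one decision per subcarrier)."""
    n_sc = len(gains[0])
    return [max(range(len(gains)), key=lambda r: gains[r][k]) for k in range(n_sc)]

# gains[relay][subcarrier]: end-to-end channel gains for 2 relays, 2 subcarriers
gains = [[1.0, 4.0],
         [3.0, 2.0]]
print(bulk_select(gains))            # relay 1 (its worst-case gain 2.0 beats 1.0)
print(per_subcarrier_select(gains))  # [1, 0]: each subcarrier gets its own best relay
```

Bulk selection needs one decision (cheap, lower gain); per-subcarrier needs one per subcarrier (expensive, higher gain). The cascaded ANN described above learns a middle ground between these two extremes.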
Speaker Wenwu Li (Guangxi University)



Session Chair

Fang Fang (Western University, Canada)

Session PerAI-6G-Closing

Closing Session

May 20 Sat, 5:50 PM — 6:00 PM EDT

Closing Session

Fang Fang (Western University, Canada)

TBD
Speaker
Speaker biography is not available.

Session Chair

Fang Fang (Western University, Canada)

