Workshops

Session PerAI-6G-OS

PerAI6G 2024 – Opening Session

Conference
8:00 AM — 8:05 AM PDT
Local
May 20 Mon, 11:00 AM — 11:05 AM EDT
Location
Regency C/D

Session PerAI-6G-KS1

PerAI6G 2024 – Keynote Session 1

Conference
8:05 AM — 8:40 AM PDT
Local
May 20 Mon, 11:05 AM — 11:40 AM EDT
Location
Regency C/D

Session PerAI-6G-KS2

PerAI6G 2024 – Keynote Session 2

Conference
8:40 AM — 9:15 AM PDT
Local
May 20 Mon, 11:40 AM — 12:15 PM EDT
Location
Regency C/D

Session PerAI-6G-S1

PerAI6G 2024 – Session 1: Distributed Learning in 6G

Conference
9:20 AM — 11:00 AM PDT
Local
May 20 Mon, 12:20 PM — 2:00 PM EDT
Location
Regency C/D

Designing Robust 6G Networks with Bimodal Distribution for Decentralized Federated Learning

Xu Wang and Yuanzhu Chen (Queen's University, Canada); Octavia A. Dobre (Memorial University, Canada)

Integrating distributed machine learning with 6G technology is aligned with the United Nations' Sustainable Development Goals, notably in global connectivity and sustainable industrial development. This combination bolsters innovation, contributing significantly to objectives in industry and infrastructure. Large networks are often modeled as variants or compounds of the random or power-law graph. Yet, both random and power-law networks present distinct vulnerabilities -- random networks are susceptible to failures, while power-law networks are more prone to targeted attacks. To address this issue, we propose to create the network topology based on a bimodal degree distribution so that the network is robust against both types of node removals. Such a design features one central hub with a high degree of connections and other nodes having consistently lower degrees. The resilience of this hub-and-spoke configuration against random failures is clear, especially given the small chance of the central hub being impacted. In the case of a targeted attack, despite the significant risk of losing the hub, the network effectively withstands further node removals thanks to the residual connections among the remaining nodes. Simulation experiments in decentralized federated learning show that the developed large 6G network topology is resilient to both random failures and targeted attacks.
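
A minimal sketch of the hub-and-spoke design described in this abstract, assuming illustrative sizes and degrees rather than the authors' settings: it builds a bimodal-degree topology with networkx and compares the surviving largest component under random failures versus removal of the hub.

```python
# Sketch: a bimodal (hub-and-spoke) topology and its robustness to random
# failures versus a targeted hub removal. Sizes and degrees are illustrative
# assumptions, not the paper's configuration.
import random
import networkx as nx

def bimodal_topology(n_spokes=100, extra_spoke_links=2, seed=0):
    """Node 0 is the single high-degree hub; spokes keep a uniformly low degree."""
    rng = random.Random(seed)
    g = nx.Graph()
    g.add_nodes_from(range(n_spokes + 1))
    for v in range(1, n_spokes + 1):
        g.add_edge(0, v)                          # every spoke connects to the hub
        for _ in range(extra_spoke_links):        # a few spoke-to-spoke links
            u = rng.randrange(1, n_spokes + 1)
            if u != v:
                g.add_edge(v, u)
    return g

def largest_component(g, removed):
    h = g.copy()
    h.remove_nodes_from(removed)
    return max((len(c) for c in nx.connected_components(h)), default=0)

g = bimodal_topology()
random_failures = random.Random(1).sample(sorted(g.nodes), 10)
print("largest component after random failures:", largest_component(g, random_failures))
print("largest component after losing the hub: ", largest_component(g, [0]))
```
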
Speaker
Speaker biography is not available.

Distributed Link Heterogeneity Exploitation for Attention-Weighted Robust Federated Learning in 6G Networks

Qiaomei Han (Western University, Canada); Xianbin Wang (Western University, Canada); Weiming Shen (State Key Lab of Digital Manufacturing Equipment and Technology, Huazhong University of Science and Technology, China)

The rapid evolution of wireless communications is paving the way for distributed computing, enabling pervasive intelligence in 6G networks through distributed machine learning, particularly federated learning (FL). Despite the dramatically enhanced connectivity expected from 6G, imperfect or heterogeneous communication links among distributed participating devices remain a fundamental challenge for the performance improvement of FL. To overcome this communication constraint, existing solutions primarily focus on reducing communication overhead through techniques like FL model compression or sparsification. However, these approaches often ignore the impact of link heterogeneity among distributed devices on FL performance. To bridge this gap, we propose a heterogeneous link attention-weighted FL framework in 6G networks through the characterization of link heterogeneity and the design of an attention mechanism-driven model aggregation method. Specifically, leveraging prior knowledge about distributed communication links and their performance metrics such as latency, reliability, and data rate, the FL server calculates the joint performance dissimilarities among these links, thereby characterizing their heterogeneity relations as a binary probabilistic matrix. Subsequently, an attention mechanism is employed for FL model aggregation, where the generated attention weights represent the degree to which each link is influenced by other links. In this way, the performance of the obtained global FL model can be ensured. Experimental results further demonstrate that our proposed FL framework and model aggregation approach robustly handle FL under the impact of link heterogeneity and optimize the learning performance of FL.
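
A minimal numpy sketch of attention-weighted model aggregation driven by per-link metrics; the metric normalization, scoring rule, and softmax temperature are illustrative assumptions rather than the paper's exact design.

```python
# Sketch: aggregate client model updates with attention weights derived from
# per-link metrics (e.g. latency, reliability, data rate). Illustrative only.
import numpy as np

def attention_weights(link_metrics, temperature=1.0):
    """link_metrics: (n_clients, n_metrics), higher = better link quality."""
    z = (link_metrics - link_metrics.mean(0)) / (link_metrics.std(0) + 1e-9)
    scores = z.mean(axis=1)                      # joint link-quality score per client
    e = np.exp(scores / temperature)             # softmax over clients
    return e / e.sum()

def aggregate(client_models, weights):
    """client_models: (n_clients, n_params) flattened model updates."""
    return np.average(client_models, axis=0, weights=weights)

rng = np.random.default_rng(0)
metrics = rng.uniform(size=(5, 3))               # 5 clients, 3 link metrics
models = rng.normal(size=(5, 10))                # toy 10-parameter models
w = attention_weights(metrics)
global_model = aggregate(models, w)
print(w.round(3), global_model.shape)
```
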
Speaker
Speaker biography is not available.

Decentralized Federated Learning Under Free-riders: Credibility Analysis

Long Zhang, Shuang Qin, Gang Feng and Youkun Peng (University of Electronic Science and Technology of China, China)

With aggregator-free control, blockchain-assisted federated learning (BFL) is acknowledged as an effective decentralized paradigm to address the pitfalls of the conventional centralized FL framework, such as single server failures, client dropouts, etc. However, the integration of FL and blockchain is inherently vulnerable to the free-rider attack, i.e., attackers disguise themselves to participate in FL without actually contributing to the model training. To exploit BFL well in real applications, it is necessary to quantitatively analyze the impact of such an attack on the credibility of BFL. In this paper, we first carefully examine the fundamental characteristics and limitations of FL and blockchain within the underlying physical network infrastructure. In BFL, the delay incurred in model training, blockchain service and model transmission between clients significantly affects the free-riding success probability. We then analyze the delay experienced by transmitting and validating a model in BFL with and without free-riders based on queuing theory, and on this basis theoretically derive the free-riding success probability. Finally, theoretical analysis and simulation experiments show that the system suffers considerable performance degradation when subjected to the free-rider attack, and the dominant factors affecting BFL are identified.
Speaker
Speaker biography is not available.

Two-Timescale Energy Optimization for Wireless Federated Learning

Jinhao Ouyang and Yuan Liu (South China University of Technology, China); Hang Liu (Cornell University, USA)

Federated learning (FL) enables distributed devices to train a shared machine learning (ML) model collaboratively while protecting their data privacy. However, the limited radio and computational resources of mobile devices pose performance bottlenecks for deploying FL over wireless networks. In this paper, we consider model parameter freezing and power control to address these issues. First, we analyze the impact of model parameter freezing and unreliable transmission on the convergence rate. Next, we formulate a two-timescale optimization problem over the parameter freezing percentage and transmit power to minimize the model convergence error subject to the energy budget. To solve this problem, we decompose it into parallel sub-problems and further decompose each sub-problem into problems on two different timescales using the Lyapunov optimization method. The optimal parameter freezing and power control strategies are derived in an online fashion. Experimental results demonstrate the superiority of the proposed scheme compared with the benchmark schemes.
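
A minimal sketch of an online drift-plus-penalty (Lyapunov) rule for picking the parameter-freezing fraction and transmit power each round; the error proxy, energy model, and parameter grids are illustrative assumptions, not the paper's derivation.

```python
# Sketch: per-round drift-plus-penalty choice of (freeze fraction, power),
# with a virtual queue tracking the energy budget. Models are assumed.
import itertools
import numpy as np

V = 10.0                       # tradeoff between convergence error and energy
energy_budget_per_round = 1.0
Q = 0.0                        # virtual queue for energy-budget violation

freeze_options = [0.0, 0.25, 0.5, 0.75]        # fraction of parameters frozen
power_options = [0.1, 0.5, 1.0]                # transmit power levels (W)

def error_proxy(freeze, power, channel_gain):
    # more freezing and lower success probability -> larger error (assumed model)
    p_success = 1 - np.exp(-power * channel_gain)
    return freeze + (1 - p_success)

def energy(freeze, power):
    return power * (1 - freeze) * 0.8 + 0.2    # assumed per-round energy model

rng = np.random.default_rng(0)
for t in range(5):
    h = rng.exponential(1.0)                   # current channel gain
    freeze, power = min(
        itertools.product(freeze_options, power_options),
        key=lambda a: V * error_proxy(*a, h) + Q * energy(*a),
    )
    Q = max(Q + energy(freeze, power) - energy_budget_per_round, 0.0)
    print(f"round {t}: freeze={freeze}, power={power}, Q={Q:.2f}")
```
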
Speaker
Speaker biography is not available.

Multi-User Interaction Experience Optimization in the Metaverse

Zhongxin Cao, Wenqian Zhang and Bo Zhang (Zhengzhou University, China); Rui Yin (Zhejiang University, China); Celimuge Wu (The University of Electro-Communications, Japan); Xianfu Chen (VTT Technical Research Centre of Finland, Finland)

The Metaverse is an emerging research and business domain that aims to create a seamless integration of the physical and digital worlds. However, the current network infrastructure poses significant challenges for achieving smooth and immersive interaction in the Metaverse. In this paper, we address the problem of resource management for multi-user interaction experience optimization. We consider a scenario where users generate various types of interaction requests in the Metaverse. The Metaverse service provider (MSP) monitors the requests and accordingly assigns the resources to the users. To capture the stochastic nature of user requests and the different interaction experience tolerance levels, we propose a novel quality-of-experience (QoE) metric and seek to maximize the user QoE by optimizing resource management. We formulate this problem as a Markov decision process (MDP) with the objective of maximizing the expected long-term user QoE. We further develop a proximal policy optimization (PPO) algorithm to solve the MDP problem. The experimental results show that the obtained PPO algorithm can significantly improve the user QoE in the Metaverse.
Speaker Zhongxin Cao
Speaker biography is not available.

Session PerAI-6G-S2

PerAI6G 2024 – Session 2: Reinforcement Learning for 6G

Conference
11:10 AM — 12:30 PM PDT
Local
May 20 Mon, 2:10 PM — 3:30 PM EDT
Location
Regency C/D

The Curse of (Too Much) Choice: Handling combinatorial action spaces in slice orchestration problems using DQN with coordinated branches

Pavlos Doanis (EURECOM, France); Thrasyvoulos Spyropoulos (Technical University of Crete, Greece & Eurecom, France)

One of the prominent problems in envisioned 6G networks is the truly dynamic placement of multiple virtual network function chains on top of the physical network infrastructure. Reinforcement learning based schemes have been recently explored for such problems, yet they have to deal with astronomically large state and action spaces in this context. Using a standard Deep Q-Network (DQN) is a common way to effectively deal with state complexity. While independent DQN (iDQN) agents could be further used to mitigate action space complexity, such schemes often suffer from instability and sample (in)efficiency, and their theoretical performance is hard to assess. To this end, we propose a DQN-based scheme that uses a recent deep neural network architecture with a different branch responsible for the placement of each virtual network function (again reducing action space complexity), yet with (implicit) coordination among branches via shared layers (hence avoiding iDQN shortcomings). Using a real traffic dataset, we (i) theoretically ground the proposed scheme by comparing it with an optimal online algorithm for a stateless experts environment; and (ii) demonstrate a 41% cost improvement compared to the existing state-of-the-art multi-agent DQN approach (independent agents).
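
A minimal PyTorch sketch of the coordinated-branches idea: a shared trunk feeds one Q-value head per virtual network function, so each branch enumerates only its own placement choices instead of the joint action space. Layer sizes, the number of VNFs, and the per-VNF action set are illustrative assumptions.

```python
# Sketch: DQN with a shared trunk and one branch (Q-head) per VNF placement
# decision. Dimensions and the action set are illustrative assumptions.
import torch
import torch.nn as nn

class BranchingQNetwork(nn.Module):
    def __init__(self, state_dim, n_vnfs, n_nodes, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(               # shared layers coordinate branches
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.branches = nn.ModuleList(
            [nn.Linear(hidden, n_nodes) for _ in range(n_vnfs)]
        )

    def forward(self, state):
        z = self.trunk(state)
        # one Q-vector per VNF: shape (batch, n_vnfs, n_nodes)
        return torch.stack([branch(z) for branch in self.branches], dim=1)

net = BranchingQNetwork(state_dim=32, n_vnfs=4, n_nodes=10)
q = net(torch.randn(8, 32))
placement = q.argmax(dim=-1)        # greedy placement of each VNF, shape (8, 4)
print(q.shape, placement.shape)
```
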
Speaker
Speaker biography is not available.

Lyapunov-Based MADRL Policy in Wireless Powered MEC Assisted Monitoring Systems

Xinying Liu and Yuhan Yi (Donghua University, China); Wenqian Zhang (Shanghai Maritime University, China); Guanglin Zhang (Donghua University, China)

In real-time monitoring systems, wireless devices (WDs) are employed for information gathering, with age of information (AoI) serving as a crucial measure to assess the timeliness of status information. The combination of wireless power transfer (WPT) and mobile edge computing (MEC) effectively addresses the challenges posed by the limited battery lifespan and computation capabilities of WDs. This paper focuses on a WP-MEC system where WDs adopt a zero-waiting strategy. A multi-stage stochastic optimization problem is formulated with the goal of maximizing system utility. By utilizing Lyapunov optimization, we convert the problem into deterministic subproblems. However, solving the subproblems proves challenging as they remain mixed-integer nonlinear programming (MINLP) problems. We propose LyMAPPO and LyIPPO to tackle this problem, which leverage the benefits of Lyapunov optimization and multi-agent deep reinforcement learning (MADRL). MADRL enables WDs to make optimal decisions on task scheduling, resource allocation and computation offloading. Simulation results confirm the findings of our theoretical analyses, demonstrating the improved performance of LyMAPPO and LyIPPO in comparison to benchmark algorithms.
Speaker
Speaker biography is not available.

Routing Algorithm Design Based on Deep Reinforcement Learning and GNN

Kaiyuan Zhao and Zinan Zhao (Harbin Institute of Technology, China); ZhenYong Wang (Harbin Institute of Technology & Shenzhen Academy of Aerospace Technology, China); Hongjiang Zhang (China Aerospace Science and Technology Innovation Institute, China)

We research and design the routing algorithms GDQR and GD3QR based on deep reinforcement learning and graph neural networks, and complete their model training and performance testing. We design the reward function of the deep reinforcement learning routing algorithm and derive the value-learning theory the algorithm relies on. Based on the Markov decision process we formulate, we design the decision-making and training parts of the intelligent routing algorithm and propose the GDQR routing algorithm. We then improve GDQR using Dueling Network and Double DQN (Deep Q-Network) techniques and propose the GD3QR routing algorithm, addressing both the strong impact of traffic fluctuations on the reward function in routing scenarios and the overestimation problem in value learning. We complete the training of the GDQR and GD3QR algorithms and test the performance of both. The results show that GDQR and GD3QR can effectively adjust routing according to the current network status and improve the QoS performance of the entire network in complex traffic scenarios.
Speaker Kaiyuan Zhao
Speaker biography is not available.

Multi-Agent DRL-based Deadline-Aware Routing for Deterministic Wide-Area Networks

Jie Ren, Dong Yang and Weiting Zhang (Beijing Jiaotong University, China); Mingyang Chen (University College London, United Kingdom (Great Britain)); Hongke Zhang (Beijing Jiaotong University, China)

The advancement of 6G technologies has sparked interest in the study of low-latency and deterministic communications. A critical challenge for providing an end-to-end deterministic service in 6G is how to overcome the high volume and time-varying properties of wide-area network (WAN) traffic to achieve deterministic support over WANs. In this paper, we propose a deadline-aware framework to support deterministic communication over WANs, where a novel coordinated earliest deadline first scheduler is utilized in the data plane to guarantee timely transmission. Specifically, we formulate the deadline-aware routing problem in the proposed framework as a stochastic optimization problem to maximize the timely delivery ratio. To improve scalability, we propose a deadline-aware routing algorithm based on multi-agent deep reinforcement learning that computes a deadline-satisfied route in a hop-by-hop manner. Simulation results validate that the proposed algorithm can achieve a higher timely delivery ratio, lower latency, and lower jitter compared with benchmark routing algorithms and mainstream deterministic schemes.
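
A minimal sketch of earliest-deadline-first scheduling at a single hop, the building block behind the coordinated EDF scheduler mentioned above; the packet fields and fixed per-hop service time are illustrative assumptions.

```python
# Sketch: earliest-deadline-first (EDF) scheduling at one WAN hop.
# Packet fields and the constant service time are illustrative assumptions.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Packet:
    deadline: float                    # absolute deadline; heap orders on this
    flow_id: str = field(compare=False)

def edf_schedule(packets, service_time=1.0):
    """Serve packets in deadline order and report which ones arrive in time."""
    heap = list(packets)
    heapq.heapify(heap)
    now, delivered_on_time = 0.0, []
    while heap:
        pkt = heapq.heappop(heap)      # always serve the earliest deadline next
        now += service_time
        if now <= pkt.deadline:
            delivered_on_time.append(pkt.flow_id)
    return delivered_on_time

pkts = [Packet(5.0, "A"), Packet(2.0, "B"), Packet(3.0, "C"), Packet(2.5, "D")]
print("on-time flows:", edf_schedule(pkts))
```
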
Speaker Jie Ren

Jie Ren received the B.S. degree and the M.S. degree in communication and information systems from Beijing Jiaotong University, Beijing, China, in 2018 and 2021, respectively. He is currently working toward the Ph.D. degree in communication and information systems at Beijing Jiaotong University, Beijing, China. His research interests include time-sensitive networks and network architecture.


Session PerAI-6G-S3

PerAI6G 2024 – Session 3: Resource Management in 6G

Conference
2:00 PM — 4:00 PM PDT
Local
May 20 Mon, 5:00 PM — 7:00 PM EDT
Location
Regency C/D

Network Slicing for Edge-Cloud Orchestrated Networks via Online Convex Optimization

Kasra Khalafi and Ning Lu (Queen's University, Canada)

In this paper, we study a network slicing framework for edge-cloud orchestrated networks. The framework integrates multi-access edge computing (MEC), allowing devices to offload computation-intensive tasks to Radio Access Networks (RAN) at the edge, with the flexibility to process tasks in the cloud based on resource availability and cost. In order to make workload distribution and resource allocation decisions for the edge and cloud resources, an Online Convex Optimization (OCO) approach is tailored. This approach leverages predictions to make the resource allocation and workload distribution decisions. The algorithm aims to optimize the long-term system cost while satisfying the Quality of Service (QoS) constraints, particularly in terms of minimizing delays. The proposed model encompasses various costs, including the costs for edge and cloud computing resources, communication resources, delay violations, and slice reconfiguration. Through simulations, we demonstrate the efficacy of the algorithm in reducing the costs associated with network slicing.
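
A minimal sketch of the online convex optimization machinery referenced above, using projected online gradient descent to split workload between edge and cloud; the cost model, step size, and constraint are illustrative assumptions, not the paper's formulation.

```python
# Sketch: projected online gradient descent for splitting a slice's workload
# between edge and cloud. Cost model and parameters are illustrative assumptions.
import numpy as np

def project_to_scaled_simplex(x, total):
    """Project onto {x >= 0, sum(x) = total} (standard sort-based projection)."""
    n = len(x)
    u = np.sort(x)[::-1]
    css = np.cumsum(u) - total
    rho = np.nonzero(u - css / np.arange(1, n + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(x - theta, 0.0)

eta = 0.1                          # step size
x = np.array([0.5, 0.5])           # [edge share, cloud share] of the workload
rng = np.random.default_rng(0)
for t in range(10):
    prices = rng.uniform(0.5, 1.5, size=2)        # revealed per-slot unit costs
    delay_penalty = np.array([0.2, 1.0])          # cloud path assumed slower
    grad = prices + delay_penalty                  # gradient of this slot's cost
    x = project_to_scaled_simplex(x - eta * grad, total=1.0)
print("final edge/cloud split:", x.round(3))
```
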
Speaker
Speaker biography is not available.

Interference Graph Based Spectrum Allocation for Blockchain-enabled CBRS System

Jingwen Pan, Wei Wang, Kexin Chen and Xiang Shao (Nanjing University of Aeronautics and Astronautics, China); Shuo Wang (Sony (China) Limited, China); Chen Sun (Sony China Research Laboratory Beijing, China)

As a pioneering trial for dynamic spectrum sharing, the citizens broadband radio service (CBRS) encounters difficulties in effectively managing interference and spectrum allocation among general authorized access (GAA) users. In this paper, we propose a blockchain-enabled spectrum sharing model designed to ensure interference protection for high-priority users, while simultaneously utilizing blockchain technology to enhance the quality of service (QoS) for GAA users. Additionally, we present a spectrum allocation scheme based on interference graphs and propose an improved graph coloring algorithm for spectrum allocation. This algorithm significantly enhances spectrum utilization and minimizes user interference. Simulation results show that, compared with the traditional greedy algorithm, our proposed algorithm significantly increases the number of unauthorized users that can be accommodated in the same scenario.
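
A minimal sketch of interference-graph-based channel assignment; networkx's built-in greedy coloring stands in for the paper's improved coloring algorithm, and the toy interference graph is an assumption.

```python
# Sketch: assign channels to GAA users by coloring an interference graph,
# so that no two interfering users share a channel. Illustrative graph only.
import networkx as nx

# Nodes are GAA users; an edge means the two users would interfere
interference = nx.Graph([
    ("u1", "u2"), ("u1", "u3"), ("u2", "u3"), ("u3", "u4"), ("u4", "u5"),
])

coloring = nx.coloring.greedy_color(interference, strategy="largest_first")
n_channels = max(coloring.values()) + 1
print(f"channels needed: {n_channels}")
for user, channel in sorted(coloring.items()):
    print(f"{user} -> channel {channel}")
```
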
Speaker
Speaker biography is not available.

Orchestrating and Scheduling System for Workflows in Heterogeneous and Dynamic Environment

Wenliang Liang (China); Hao Lin (Huawei, China); Haihua Shen (Huawei Technologies Co., Ltd., China); Enbo Wang (Huawei, China)

Many orchestrating and scheduling systems and algorithms have been proposed or deployed in cloud and edge computing scenarios, where computing resources are heterogeneous and dynamic due to the rapid growth of mobile devices and Internet-of-Things applications. Intelligent services can be abstracted or deployed as computing workflows. When data privacy and service latency must be considered, how to orchestrate and schedule these intelligent services (i.e., training, inference, or compound computing workflows in heterogeneous and dynamic scenarios) becomes increasingly important. Previous proposals do not take these features sufficiently into consideration. We propose the Network Artificial Intelligence Management and Orchestration (NAMO) architecture and a reinforcement learning algorithm for this heterogeneous and dynamic scenario, which involves all cloud-edge-user computing devices and highly dynamic wireless channels.
Speaker
Speaker biography is not available.

Freshness-Aware Cache Update with Time-Varying Popularities in Edge Networks

Tao Chenhui and Jingjing Luo (Harbin Institute of Technology, Shenzhen, China); Fu-Chun Zheng (Harbin Institute of Technology, Shenzhen, China & University of York, United Kingdom (Great Britain)); Lin Gao (Harbin Institute of Technology, Shenzhen, China)

Edge caching has been well recognized as a way of relieving the burden of networks, but it may lead to content staleness, calling for effective cache update. In this paper, we investigate a cache update problem in edge networks, where time-sensitive contents are downloaded from a source under time-varying content popularity. Unlike previous works assuming the popularity varies rapidly, we consider that different contents can have different popularity time scales. For contents whose popularity varies slowly over slots, many short-term update transmissions may occur during a slot. We formulate a cache update problem that optimizes the number of updates and the inter-update intervals within a slot for each content, in order to minimize the average age of information (AoI) of the requested contents. To tackle this problem efficiently, we reformulate and then decompose it into two sub-problems. Based on the theoretical results of the two sub-problems, we propose a practical update policy without prior knowledge of content popularity. Compared with the widely known square-root law policy, simulations show that the proposed policy achieves better performance.
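
A minimal sketch of the AoI bookkeeping behind this problem: if content i is refreshed m_i times at equal intervals within a slot of length T, its average AoI within the slot is roughly T / (2 m_i); the square-root-law benchmark mentioned above allocates updates in proportion to the square root of popularity. Popularities and the update budget are illustrative assumptions.

```python
# Sketch: popularity-weighted average AoI under two update allocations.
# Square-root law is the benchmark; numbers are illustrative assumptions.
import numpy as np

T = 100.0                                     # slot length (s)
popularity = np.array([0.5, 0.3, 0.15, 0.05]) # per-content request probability
budget = 20                                   # total updates available per slot

def weighted_aoi(updates):
    """Average AoI ~ T / (2 * m_i) per content, weighted by popularity."""
    return float(np.sum(popularity * (T / (2 * np.maximum(updates, 1)))))

uniform = np.full(len(popularity), budget / len(popularity))
sqrt_law = budget * np.sqrt(popularity) / np.sqrt(popularity).sum()

print("uniform allocation AoI:  ", round(weighted_aoi(uniform), 2))
print("square-root-law AoI:     ", round(weighted_aoi(sqrt_law), 2))
```
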
Speaker
Speaker biography is not available.

Energy-Efficient Task Offloading in UAV-RIS-Assisted Mobile Edge Computing with NOMA

Mingyang Zhang (Shanghai University, China); Zhou Su (Xi'an Jiaotong University, China); Qichao Xu and Yihao Qi (Shanghai University, China); Dongfeng Fang (California Polytechnic State University San Luis Obispo, USA)

Mobile Edge Computing (MEC) may face coverage and channel quality limitations due to obstructed, performance-degraded communication between ground users and MEC servers. In this paper, we propose a novel MEC system in which a reconfigurable intelligent surface (RIS) mounted on a deployed unmanned aerial vehicle (UAV) facilitates communication between ground users (GUs) and the MEC server. In addition, to further improve spectrum utilization efficiency, non-orthogonal multiple access (NOMA) technology is utilized to increase the transmission rate. The UAV location and RIS passive beamforming are jointly optimized to minimize the overall energy consumption of the system. To tackle the non-convex nature of this problem, the joint optimization problem is decomposed into two separate sub-problems to facilitate more efficient solution approaches. The complex circle manifold optimization (CCMO) algorithm and a genetic algorithm (GA) are utilized to address these two sub-problems alternately. Simulation results demonstrate that the proposed scheme effectively reduces the overall energy consumption of the MEC system.
Speaker Yihao Qi
Speaker biography is not available.

Scalable Blockchain-empowered Distributed Computation Offloading: A Deep Reinforcement Learning Approach

Feng Xu, Zitong Zhao and Lei Liu (Xidian University, China); Xiaoming Yuan (Northeastern University, China); Qingqi Pei (Xidian University, China)

The rapid development of communication networks has motivated a range of new applications with diversified demands. These applications have brought enormous pressure on network bandwidth and response latency. Benefiting from its proximity to users, mobile edge computing is viewed as a promising network paradigm to alleviate this pressure, but it still faces privacy and security concerns. Blockchain provides a secure solution for data transmission and computation offloading in mobile edge computing. However, owing to the characteristics of blockchain, the integration of mobile edge computing and blockchain causes additional computational overhead and task processing latency. Therefore, in this paper, we present a scalable blockchain-empowered mobile edge computing architecture, where blockchain is used to provide a more credible network environment. Specifically, sharding is leveraged to improve the scalability of blockchain. Based on this architecture, a reliable computation offloading scheme is proposed to optimize task offloading and resource allocation, and a deep reinforcement learning algorithm is employed to find the optimal solution. Finally, simulation experiments are conducted to evaluate the performance of our proposed work, and the simulation results show the superiority of our proposed approach.
Speaker
Speaker biography is not available.

Session PerAI-6G-S4

PerAI6G 2024 – Session 4: AI-Assisted 6G Communication

Conference
4:05 PM — 6:00 PM PDT
Local
May 20 Mon, 7:05 PM — 9:00 PM EDT
Location
Regency C/D

Codebook-enabled Generative End-to-end Semantic Communication Powered by Transformer

Peigen Ye (Sun Yat-Sen University, China); Yaping Sun (Pengcheng Laboratory, China); Shumin Yao and Hao Chen (Peng Cheng Laboratory, China); Xiaodong Xu (Beijing University of Posts and Telecommunications, China); Shuguang Cui (The Chinese University of Hong Kong, Shenzhen & CUHKSZ-FNii, China)

Codebook-based generative semantic communication is attracting increasing attention, since only indices need to be transmitted when the codebook is shared between transmitter and receiver. However, because the semantic relations among code vectors are not necessarily related to the distance between the corresponding code indices, the performance of a codebook-enabled semantic communication system is susceptible to channel noise. Thus, how to improve the system's robustness against noise requires careful design. This paper proposes a robust codebook-assisted image semantic communication system, where the semantic codec and codebook are first jointly constructed, and a vector-to-index Transformer is then designed, guided by the codebook, to eliminate the effects of channel noise and achieve image generation. Thanks to the high-quality codebook assisting the Transformer, the images generated at the receiver outperform those of the compared methods in terms of visual perception. Finally, numerical results and generated images demonstrate the advantages of the generative semantic communication method over JPEG+LDPC and traditional joint source channel coding (JSCC) methods.
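
A minimal sketch of the codebook mechanism this abstract builds on: features are mapped to nearest-codeword indices, the indices pass through a toy noisy channel, and the receiver looks them up in the shared codebook. The codebook size, feature dimension, and index-flip noise model are illustrative assumptions; the paper's Transformer-based correction is not shown.

```python
# Sketch: codebook-based transmission, where only indices of the nearest
# code vectors are sent over a noisy channel. Illustrative parameters only.
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(64, 16))          # 64 shared code vectors, dim 16

def encode(features):
    """Map each feature vector to the index of its nearest code vector."""
    d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=-1)
    return d.argmin(axis=1)

def channel(indices, flip_prob=0.1):
    """Toy noisy channel: each index is replaced at random w.p. flip_prob."""
    noisy = indices.copy()
    flips = rng.random(len(indices)) < flip_prob
    noisy[flips] = rng.integers(0, len(codebook), flips.sum())
    return noisy

features = rng.normal(size=(8, 16))           # toy transmitter-side features
received = codebook[channel(encode(features))]
print("reconstruction MSE:", float(np.mean((received - features) ** 2)))
```
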
Speaker
Speaker biography is not available.

Emergency Localization for Mobile Ground Users: An Adaptive UAV Trajectory Planning Method

Zhihao Zhu (Beijing University of Posts and Telecommunications, China); Jiafan He (Nanjing Research Institute of Electronics Engineering, China); Luyang Hou, Lianming Xu, Wendi Zhu and Li Wang (Beijing University of Posts and Telecommunications, China)

In emergency search and rescue scenarios, the quick location of trapped people is essential. However, disasters can render the Global Positioning System (GPS) unusable. Unmanned aerial vehicles (UAVs) with localization devices can serve as mobile anchors due to their agility and high line-of-sight (LoS) probability. Nonetheless, the number of available UAVs during the initial stages of disaster relief is limited, and innovative methods are needed to quickly plan UAV trajectories to locate non-uniformly distributed dynamic targets while ensuring localization accuracy. To address this challenge, we design a single UAV localization method without hovering, use the maximum likelihood estimation (MLE) method to estimate the location of mobile users and define the upper bound of the localization error by considering users' movement. Combining this localization method and localization error-index, we utilize the enhanced particle swarm optimization (EPSO) algorithm and edge access strategy to develop a low complexity localization-oriented adaptive trajectory planning algorithm. Simulation results demonstrate that our method outperforms other baseline algorithms, enabling faster localization without compromising localization accuracy.
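
A minimal sketch of range-based position estimation of the kind this abstract relies on, solved as a nonlinear least-squares problem (equivalent to the MLE under Gaussian ranging noise); the UAV waypoints, ground-plane assumption, and noise level are illustrative assumptions.

```python
# Sketch: estimate a ground user's position from ranges measured at several
# UAV waypoints via nonlinear least squares. Illustrative parameters only.
import numpy as np
from scipy.optimize import least_squares

uav_waypoints = np.array([[0, 0, 50], [80, 0, 50], [80, 80, 50], [0, 80, 50]], float)
true_user = np.array([30.0, 45.0, 0.0])

rng = np.random.default_rng(0)
ranges = np.linalg.norm(uav_waypoints - true_user, axis=1) + rng.normal(0, 1.0, 4)

def residuals(xy):
    user = np.array([xy[0], xy[1], 0.0])       # user assumed on the ground plane
    return np.linalg.norm(uav_waypoints - user, axis=1) - ranges

est = least_squares(residuals, x0=[40.0, 40.0]).x
print("estimated position:", est.round(2),
      "error:", round(float(np.linalg.norm(est - true_user[:2])), 2))
```
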
Speaker
Speaker biography is not available.

Semantic-Empowered Utility Loss of Information Transmission Policy in Satellite-Integrated Internet

Jianhao Huang (Harbin Institute of Technology Shenzhen, China); Jian Jiao (Harbin Institute of Technology - Shenzhen, China); Ye Wang (Peng Cheng Laboratory, China); Rongxing Lu (University of New Brunswick, Canada); Qinyu Zhang (Shenzhen Graduate School, Harbin Institute of Technology, China)

The recent pervasive network intelligence for 6G has further triggered extensive research into semantic communication. In response to the challenges of its effectiveness aspects, we propose a novel semantic metric, Utility loss of Information (UoI), that can quantify the impact of the duration and severity of transceiver mismatch and address the shortcomings of existing semantic metrics. This paper then contributes meaningful results centered around UoI by utilizing Deep Reinforcement Learning (DRL) to intelligently choose when to sample, how to adopt the appropriate number of coded packets, and whether to retransmit in a network coding hybrid automatic repeat request (NC-HARQ) aided satellite-integrated Internet, characterized by high bit error rates (BER), delayed feedback, and rapid channel dynamics. The approach aims to strike an optimal tradeoff between UoI and energy consumption (TOUE). More precisely, we cast the joint minimization of UoI and energy consumption as a partially observable Markov decision process (POMDP). Subsequently, a prioritized experience replay-aided dueling double deep Q-network (PER-D3QN) is adopted to address this problem. Simulation results validate that our TOUE policy can yield significant gains over several state-of-the-art sampling and transmission policies, and demonstrate its sensitivity to changes in goal requirements.
Speaker
Speaker biography is not available.

Achieving Reliable and Intelligent Control-Assisted Transmission for Satellite Communications

Chenxi Li and Zan Li (Xidian University, China); Chuan Zhang (Beijing Institute of Technology, China); Nan Cheng, Lei Guan and Danyang Wang (Xidian University, China)

Satellite communication, esteemed for its capacity to offer consistent and widespread network connectivity over long distances, has emerged as a focal point of research in advancing mobile communication. In this paper, we propose an approach, termed Intelligent Control (IC)-assisted transmission, aimed at bolstering the reliability of wireless data transmission within satellite communication systems. This scheme orchestrates authorized communication devices to execute dependable wireless data transmission by employing a series of parameter sequences encompassing factors such as frequency and time. These sequences are generated using a block cipher algorithm combined with spectrum sensing outcomes, facilitating intelligent iterations of the transmission process and ensuring concurrent transmission by multiple authorized devices without mutual interference. Notably, the generation procedure of these transmission sequences mirrors that of an encryption algorithm. Simulation results demonstrate that the proposed scheme outperforms benchmark schemes with anti-interference capabilities in intricate interference scenarios.
Speaker
Speaker biography is not available.

Adaptive Cooperation Rate-Splitting Multiple Access for Integrated Satellite-Terrestrial Network

Zhiqiang Li and Shuai Han (Harbin Institute of Technology, China); Cheng Li (Simon Fraser University, Canada)

This paper investigates the integrated satellite-terrestrial network (ISTN), which has become a trend in communication development. Since terminals share the same spectrum resources in an ISTN, the satellite and terrestrial base station jointly provide information services. To manage multiple access interference and provide seamless service, we propose adaptive cooperation rate-splitting multiple access (AC-RSMA), which offers flexible multiple access modes and robust interference management. In the AC-RSMA scheme, the satellite adaptively adjusts its level of assistance to the terrestrial base station according to actual needs. Furthermore, we formulate a joint optimization problem to improve the max-min rate. To address this non-convex optimization problem, we introduce an improved alternating optimization algorithm based on the weighted minimum mean square error method. Simulation results verify that the AC-RSMA scheme has obvious advantages over other baseline schemes.
Speaker
Speaker biography is not available.

An Efficient Lightweight Satellite Image Classification Model with Improved MobileNetV3

Xiaoteng Yang, Lei Liu, Xifei Song, Jie Feng and Qingqi Pei (Xidian University, China); Xiaoming Yuan (Northeastern University, China); Jianqiao Li (China Academy of Space Technology, China)

Since raw satellite images are huge and would consume substantial resources if transmitted entirely to the ground for processing, this study proposes an improved lightweight on-orbit image classification model. By pushing the image processing task to the satellite, real-time processing is achieved and the ground communication burden is significantly reduced. The model is based on MobileNetV3, which utilizes depthwise-separable convolutions and an inverted residual structure to maintain model accuracy while preserving computational efficiency. An integrated channel attention mechanism and a spatial pyramid pooling structure further enhance the model's classification accuracy and multi-scale sensing capability. Experiments demonstrate that the accuracy of the model is improved by 1.09% and 1.84% on two datasets while keeping the number of model parameters consistent, which validates the effectiveness and accuracy of the proposed method. This lightweight model provides an efficient and feasible solution for satellite in-orbit image classification tasks.
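
A minimal PyTorch sketch of a squeeze-and-excitation style channel-attention block of the kind commonly integrated into MobileNetV3 variants; the reduction ratio and its placement in the paper's improved model are illustrative assumptions.

```python
# Sketch: squeeze-and-excitation style channel attention. The reduction ratio
# and where the block sits in the network are illustrative assumptions.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                        # x: (batch, channels, H, W)
        squeeze = x.mean(dim=(2, 3))             # global average pooling
        weights = self.fc(squeeze)[:, :, None, None]
        return x * weights                        # reweight feature channels

block = ChannelAttention(channels=32)
out = block(torch.randn(2, 32, 28, 28))
print(out.shape)                                  # torch.Size([2, 32, 28, 28])
```
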
Speaker
Speaker biography is not available.
