Workshops

The 4th International Workshop on Intelligent Cloud Computing and Networking (ICCN 2024)

Session ICCN-KS1

ICCN 2024 – Opening and Keynote Session 1: Building Resilient AI: Exploring Robustness and Heterogeneity in Federated Learning

Conference: 8:50 AM — 10:00 AM PDT
Local: May 20 Mon, 11:50 AM — 1:00 PM EDT
Location: Regency E

Session Chair

Ruidong Li (Kanazawa University, Japan)

Session ICCN-S1

ICCN 2024 – Cloud and Edge Computing 1

Conference: 10:15 AM — 11:15 AM PDT
Local: May 20 Mon, 1:15 PM — 2:15 PM EDT
Location: Regency E

AI-Driven Automation for Optimal Edge Cluster Network Management

Cheikh Saliou Mbacke Babou (National Institute of Information and Communications Technology (NICT), Japan); Yasunori Owada, Masugi Inoue, Kenichi Takizawa and Toshiaki Kuri (National Institute of Information and Communications Technology, Japan)

Unlocking the potential of edge computing demands efficient network clustering management. This paper introduces an AI-driven automation approach for edge cluster network systems that leverages artificial intelligence to automate and optimize the dynamics of edge cluster networks. By integrating AI technologies, the proposed solution provides real-time adaptability to network disconnection, resource optimization, and improved performance, offering greater efficiency and responsiveness in edge computing network management.

Delay Analysis of Multi-Priority Computing Tasks in Alibaba Cluster Traces

Chenyu Gong (The Hong Kong University of Science and Technology (Guangzhou), China); Mulei Ma (Hong Kong University of Science and Technology (Guangzhou), China); Liekang Zeng (Hong Kong University of Science and Technology (Guangzhou) & Sun Yat-Sen University, China); Yang Yang (Hong Kong University of Science and Technology (Guangzhou), China); Xiaohu Ge (Huazhong University of Science & Technology, China); Liantao Wu (East China Normal University, China)

The evolution from 5G to 6G heralds a profound shift in network services, marked by a pivot toward tailored experiences for all users. Unlike the more generalized offerings of its predecessor, the 6G era promises a landscape where customization reigns supreme, catering specifically to individual needs and preferences. This transformative leap is not just about speed; it is about a holistic approach centered on addressing diverse user requirements such as latency, energy efficiency, and computational capabilities. Latency, in particular, stands as a pivotal metric in myriad modern applications spanning healthcare, immersive VR/AR gaming, and the intricate algorithms powering autonomous driving systems. Its significance is foundational, dictating the responsiveness and real-time nature of these technologies. Yet, unraveling the intricacies of user-specific latency demands necessitates a deeper dive into data and analytics.

Power Efficient Edge-Cloud Cooperation by Value-Sensitive Bayesian Attractor Model

Tatsuya Otoshi, Hideyuki Shimonishi, Tetsuya Shimokawa and Masayuki Murata (Osaka University, Japan)

Edge computing has emerged as a critical paradigm in AI and IoT applications, offering reduced latency and improved efficiency over traditional cloud-based systems. However, the limited computational resources at the edge and the need for energy-efficient operations pose significant challenges. In this paper, we propose a novel approach for edge-cloud cooperation using the Value-Sensitive Bayesian Attractor Model (VSBAM). Our method focuses on optimizing resource allocation and decision-making processes in edge computing environments, taking into account the computational constraints and power consumption requirements of both edge and cloud systems. Although optimizing resource allocation is costly in terms of computation time, VSBAM reduces this cost by performing the computation in a distributed manner for each session. To address the convergence problem in a distributed system, VSBAM converges quickly by finding a quasi-optimal solution based on user values. Through simulation-based evaluations, we demonstrate that our approach can significantly reduce power consumption while maintaining high performance, especially in scenarios involving numerous sessions. Our findings also show that our decentralized control strategy is robust against power model errors and performs comparably to centralized control methods.

DRL-Based Two-Stage SFC Deployment Approach under Latency Constraints

Aleteng Tian (Beijing Jiaotong University, China); Bohao Feng (Beijing Jiaotong University, China); Yunxue Huang and Huachun Zhou (Beijing Jiaotong University, China); Shui Yu (University of Technology Sydney, Australia); Hongke Zhang (Beijing Jiaotong University, China)

With the continuous emergence of latency-sensitive services, such as IoT applications and augmented reality services, the delay constraint has increasingly become a crucial factor influencing the Quality of Service (QoS) of Service Function Chaining (SFC) provision. However, ensuring the acceptance ratio of SFC requests within their deadlines still faces several challenges, including path planning with sufficient resources and low latency, as well as sequentially deploying Virtual Network Functions (VNFs) on paths with varying resources. In this paper, we propose a Deep Reinforcement Learning (DRL)-based Two-Stage SFC deployment algorithm (DTS-SFC) to enhance the acceptance ratio of SFCs while meeting their deadlines. Specifically, a Graph-based Resource Aggregation Routing method (GRAR) is proposed for the first stage, which jointly considers computational and bandwidth resources to obtain candidate paths within delay constraints. Then, we propose a DRL-based algorithm that takes SFC demands and path resources into account to determine the placement of each VNF. In particular, VNF movements are used as actions to fix the varying dimensions of the action space and enable the DRL agent to output effective decisions. Comparative analysis with several DRL-based algorithms and greedy policies demonstrates that DTS-SFC outperforms them in SFC acceptance ratio and average node utilization while achieving lower average edge utilization.
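As a rough illustration of the first-stage idea (candidate paths that respect both residual bandwidth and a delay budget), the sketch below uses networkx. It is a generic filter-and-rank routine, not the paper's GRAR, and the "bw" and "delay" edge attributes are assumptions made for illustration.

    # Generic sketch: candidate paths under a delay budget with sufficient bandwidth.
    # Not the paper's GRAR; 'bw' and 'delay' edge attributes are assumptions.
    from itertools import islice
    import networkx as nx

    def candidate_paths(G, src, dst, bw_req, delay_budget, k=5):
        # Keep only edges with enough residual bandwidth.
        H = nx.DiGraph()
        H.add_edges_from((u, v, d) for u, v, d in G.edges(data=True) if d["bw"] >= bw_req)
        paths = []
        try:
            # Enumerate loop-free paths in increasing order of cumulative delay.
            for p in islice(nx.shortest_simple_paths(H, src, dst, weight="delay"), k):
                delay = sum(H[u][v]["delay"] for u, v in zip(p, p[1:]))
                if delay <= delay_budget:
                    paths.append((p, delay))
        except (nx.NetworkXNoPath, nx.NodeNotFound):
            pass
        return paths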

Session Chair

Zhi Zhou (Sun Yat-sen University, China)

Session ICCN-S2

ICCN 2024 – Cloud and Edge Computing 2

Conference: 11:15 AM — 12:15 PM PDT
Local: May 20 Mon, 2:15 PM — 3:15 PM EDT
Location: Regency E

Sample-efficient Learning for Edge Resource Allocation and Pricing with BNN Approximators

Feridun Tütüncüoğlu and György Dán (KTH Royal Institute of Technology, Sweden)

Edge computing (EC) is expected to provide low latency access to computing and storage resources to autonomous Wireless Devices (WDs).
Pricing and resource allocation in EC thus have to cope with stochastic workloads, on the one hand offering resources at a price that is attractive to WDs, and on the other hand ensuring revenue to the edge operator.
In this paper, we formulate the strategic interaction between an edge operator and WDs as a Bayesian Stackelberg Markov game. We characterize the optimal strategy of the WDs that minimizes their costs. We then show that the operator's problem can be formulated as a Markov Decision Process and propose a model-based reinforcement learning approach, based on a novel approximation of the workload dynamics at the edge cell environment.
The proposed approximation leverages two Bayesian Neural Networks (BNNs) to facilitate efficient policy learning, and enables sample-efficient transfer learning from simulated environments to a real edge environment. Our extensive simulation results demonstrate the superiority of our approach in terms of sample efficiency, outperforming state-of-the-art methods by a factor of 30 in learning rate and by 50% in operator revenue.
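As a loose illustration of a BNN-style dynamics approximator, the sketch below uses Monte Carlo dropout in PyTorch to predict the next workload state together with an uncertainty estimate. This is a common generic construction and is not claimed to match the paper's two-BNN design; the class name and dimensions are hypothetical.

    # Generic BNN-style workload-dynamics approximator via MC dropout (illustrative).
    import torch
    import torch.nn as nn

    class WorkloadBNN(nn.Module):
        def __init__(self, state_dim, action_dim, hidden=64, p_drop=0.1):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim + action_dim, hidden), nn.ReLU(), nn.Dropout(p_drop),
                nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
                nn.Linear(hidden, state_dim),          # predicted next workload state
            )

        def forward(self, state, action):
            return self.net(torch.cat([state, action], dim=-1))

        @torch.no_grad()
        def sample_next(self, state, action, n_samples=20):
            # Keep dropout active at inference to draw approximate posterior samples.
            self.train()
            x = torch.cat([state, action], dim=-1)
            draws = torch.stack([self.net(x) for _ in range(n_samples)])
            return draws.mean(0), draws.std(0)         # predictive mean and uncertainty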

Joint Optimization of Charging Time and Resource Allocation in Wireless Power Transfer Assisted Federated Learning

Jingjiao Wang (China Three Gorges University, China); Huan Zhou (Northwestern Polytechnical University, China); Liang Zhao, Deng Meng and Shouzhi Xu (China Three Gorges University, China)

As a new distributed machine learning methodology, Federated Learning (FL) allows mobile devices (MDs) to collaboratively train a global model without sharing their raw data, in a privacy-preserving manner. However, it is a great challenge to schedule each MD and allocate various resources reasonably. This paper studies the joint optimization of the computing resources used by MDs for FL training, the number of local iterations, and the Wireless Power Transfer (WPT) duration of each MD in a WPT-assisted FL system, with the goal of maximizing the total utility of all MDs over the entire FL training process. Furthermore, we analyze the problem using the Karush-Kuhn-Tucker (KKT) conditions and the Lagrange dual method, and propose an improved Lagrangian subgradient method to solve it. Finally, extensive simulation experiments are conducted under various scenarios to verify the effectiveness of the proposed algorithm. The results show that our proposed algorithm achieves better performance in terms of the total utility of all MDs compared with other benchmark methods.
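For readers unfamiliar with the machinery referenced above, the Lagrange dual and subgradient method takes the following textbook form for a generic constrained utility-maximization problem (this is the standard update, not the paper's improved variant or its specific notation):

    \max_{x}\; U(x) \quad \text{s.t.}\quad g_i(x) \le 0,\; i = 1,\dots,m,
    \qquad L(x,\lambda) = U(x) - \sum_{i=1}^{m} \lambda_i\, g_i(x),

    x^{(t)} = \arg\max_{x} L\big(x, \lambda^{(t)}\big), \qquad
    \lambda_i^{(t+1)} = \Big[\lambda_i^{(t)} + \alpha_t\, g_i\big(x^{(t)}\big)\Big]^{+},

where \alpha_t is a step size and [\,\cdot\,]^{+} projects onto the nonnegative orthant; the KKT conditions certify optimality of the resulting primal-dual pair when strong duality holds.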

LLM-CloudSec: Large Language Model Empowered Automatic and Deep Vulnerability Analysis for Intelligent Clouds

Daipeng Cao and Jun Wu (Waseda University, Japan)

The advance of intelligent cloud applications has brought attention to potential security vulnerabilities. Vulnerability detection is a critical step in ensuring the security of cloud applications. However, traditional techniques for vulnerability detection, such as static and dynamic analysis, are challenging to apply in heterogeneous cloud environments. Using data-driven methods such as Machine Learning (ML) to automate vulnerability detection in cloud applications shows promise. However, current ML solutions are limited to coarse-grained vulnerability categorization and function-level analysis. Therefore, we propose LLM-CloudSec, an unsupervised approach to fine-grained vulnerability analysis based on the Large Language Model (LLM). LLM-CloudSec uses Retrieval Augmented Generation (RAG) and the Common Weakness Enumeration (CWE) as an external knowledge base to improve its ability to detect and analyze vulnerabilities. We conduct experiments on the Juliet C++ test suite, and the results show that LLM-CloudSec enables CWE-based vulnerability classification and line-level vulnerability analysis. Additionally, we applied LLM-CloudSec to the D2A dataset, which was collected from real-world scenarios. We obtained 1230 data entries labelled with CWE and detailed vulnerability analysis. To foster related research, we publish our work on https://github.com/DPCa0/LLM-CloudSec.
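The sketch below shows, in outline, what a single RAG-assisted analysis step of this kind might look like. retrieve_cwe() and call_llm() are hypothetical stand-ins for a similarity search over a CWE knowledge base and an LLM API call; none of this is taken from the LLM-CloudSec repository.

    # Illustrative RAG-style analysis step; retrieve_cwe() and call_llm() are
    # hypothetical placeholders, not the paper's code or a real library API.
    def analyze_function(source_code: str, retrieve_cwe, call_llm, k: int = 3) -> str:
        # Pull the k CWE entries most similar to the code under analysis.
        cwe_context = "\n\n".join(retrieve_cwe(source_code, top_k=k))
        prompt = (
            "You are a vulnerability analyst. Using the CWE descriptions below as "
            "reference, identify any weakness in the code, name the CWE ID, and "
            "point to the vulnerable line(s).\n\n"
            f"CWE reference:\n{cwe_context}\n\nCode:\n{source_code}\n"
        )
        return call_llm(prompt)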

Towards Space Intelligence: Adaptive Scheduling of Satellite-Ground Collaborative Model Inference with Space Edge Computing

Yuanming Wang (Sun Yat-Sen University, China); Kongyange Zhao, Xiaoxi Zhang and Xu Chen (Sun Yat-sen University, China)

In recent years, large constellations of camera-equipped nanosatellites have been deployed in Low Earth Orbit (LEO) to provide high-resolution Earth imagery. Traditional approaches of relying on building ground stations (GS) to download and process data from LEO satellites may not be scalable as the volume of data grows. Orbital edge computing has emerged as a promising solution that places computing hardware inside LEO satellites to perform near-data processing. In this paper, we study the problem of online scheduling for collaborative deep neural network (DNN) inference between satellites and ground stations within large and collaborative LEO constellations. Our proposed scheduling framework aims to jointly optimize the processing cost and inference accuracy under latency constraints by dynamically adapting the knobs of GS selection, inference configuration, and resource provisioning. However, solving this joint optimization problem optimally is non-trivial due to its NP-hard complexity. To tackle this difficulty, we develop a primal-dual approximation algorithm that efficiently solves the problem in an online manner. The efficacy of our solution is verified through both theoretical analysis with a provable approximation ratio and trace-driven evaluations. Our solution achieves a performance improvement of up to 60.6%, and our proposed approximation algorithm has an approximation ratio of 1.3.

Session Chair

Jun Wu (Waseda University, Japan)

Session ICCN-KS2

ICCN 2024 – Keynote Session 2: Collaborative Secure Edge Intelligence for 6G IoT

Conference: 1:30 PM — 2:30 PM PDT
Local: May 20 Mon, 4:30 PM — 5:30 PM EDT
Location: Regency E

Session Chair

Ruidong Li (Kanazawa University, Japan)

Session ICCN-S3

ICCN 2024 – Cloud and Edge Security

Conference: 2:30 PM — 3:30 PM PDT
Local: May 20 Mon, 5:30 PM — 6:30 PM EDT
Location: Regency E

Blockchain Meets O-RAN: A Decentralized Zero-Trust Framework for Secure and Resilient O-RAN in 6G and beyond

Zakaria Abou El Houda (University of Montreal, Canada); Hajar Moudoud (Universite de Sherbrooke, Canada); Lyes Khoukhi (ENSICAEN, Normandie University, France)

O-RAN (Open Radio Access Network) is an initiative that promotes the development of open and interoperable radio access technologies. The O-RAN Alliance has undertaken specification efforts that align with O-RAN principles, incorporating the near-real-time RAN Intelligent Controller (RIC) to manage extensible applications (xApps) owned by various O-RAN operators and vendors. However, this integration of untrusted third-party applications raises significant security concerns, expanding the threat surface of 6G networks. Moreover, the heterogeneity in deployment, with apps residing on various sites, poses challenges for traditional security models based on perimeter security. To overcome this issue, a Zero Trust Architecture (ZTA) becomes paramount to ensure network security. In this context, we introduce TrustORAN, a novel blockchain-based decentralized Zero-Trust Framework designed to ensure security and trustworthiness in O-RAN. TrustORAN allows for the verification and authentication of xApps by O-RAN players, to prevent unauthorized access from malicious xApps. Moreover, we introduce a dynamic decentralized access control framework that allows vendors to manage permissions in a fully decentralized, flexible, scalable, and secure manner. The TrustORAN architecture is implemented, tested, and deployed on both private and public blockchains. The obtained results demonstrate that TrustORAN empowers 6G O-RAN networks with heightened security, resilience, and robustness, providing effective protection against evolving security threats while ensuring trust.

Ciphertext-Only Attack on a Secure k-NN Computation on Cloud

Santosh Kumar Upadhyaya and Srinivas Vivek (International Institute of Information Technology, Bangalore, India); Shyam S M (Indian Institute of Science Bangalore, India)

The rise of cloud computing has spurred a trend of transferring data storage and computational tasks to the cloud. To protect confidential information such as customer data and business details, it is essential to encrypt this sensitive data before cloud storage. Implementing encryption can prevent unauthorized access, data breaches, and the resultant financial loss, reputation damage, and legal issues. Moreover, to facilitate the execution of data mining algorithms on the cloud-stored data, the encryption needs to be compatible with domain computation. The k-nearest neighbor (k-NN) computation for a specific query vector is widely used in fields like location-based services. Sanyashi et al. (ICISS 2023) proposed an encryption scheme to facilitate privacy-preserving k-NN computation on the cloud by utilizing Asymmetric Scalar-Product-Preserving Encryption (ASPE). In this work, we identify a significant vulnerability in the aforementioned encryption scheme of Sanyashi et al. Specifically, we give an efficient algorithm and also empirically demonstrate that their encryption scheme is vulnerable to the ciphertext-only attack (COA).
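To make the ASPE setting concrete, the toy example below shows the basic scalar-product-preserving property in its simplest textbook form (without the randomization and splitting of the actual scheme analyzed in the paper): an invertible secret matrix lets the cloud compute inner products between encrypted points and encrypted queries, which is what enables k-NN ranking over ciphertexts.

    # Toy ASPE illustration: scalar products are preserved under the transform.
    # Simplified textbook form, not the specific scheme attacked in the paper.
    import numpy as np

    rng = np.random.default_rng(0)
    d = 4
    M = rng.normal(size=(d, d))          # secret key: invertible with high probability
    p = rng.normal(size=d)               # database point
    q = rng.normal(size=d)               # query vector

    p_enc = M.T @ p                      # encrypted data point
    q_enc = np.linalg.inv(M) @ q         # encrypted (trapdoor) query

    # (M^T p) . (M^-1 q) = p^T M M^-1 q = p . q, so the cloud can rank neighbors.
    assert np.isclose(p_enc @ q_enc, p @ q)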

Deep Reinforcement Learning-based Trajectory Optimization and Resource Allocation for Secure UAV-Enabled MEC Networks

Gao Yuan (Tsinghua University, China); Yu Ding (Zhejiang University of Technology, China); Ye Wang (Lishui University, China); Weidang Lu (Zhejiang University of Technology, China); Yang Guo (Academy of Military Science of PLA, China); Ping Wang (Tsinghua University, China); Jiang Cao (Academy of Military Science of PLA, China)

As a prominent type of mobile edge computing (MEC) server, unmanned aerial vehicles (UAVs) can be flexibly deployed to effectively shorten the transmission distance and improve the quality of information offloading. However, due to the openness and line-of-sight characteristics of air-to-ground links, the crucial information transmitted by terminal devices (TDs) is susceptible to being eavesdropped, which poses a serious threat to UAV-enabled MEC network security. Therefore, we design a deep reinforcement learning-based trajectory optimization and resource allocation (DRTORA) scheme to improve the secure computation performance of the UAV-enabled MEC network. In DRTORA, the trajectory of the UAV and the offloading decisions and time allocation of the TDs are intelligently optimized via deep Q-learning (DQN) to maximize the secure computation capacity, while considering the constraints of time, UAV movement, minimum computation capacity, and data stability. Simulation results demonstrate that the proposed DRTORA significantly enhances the secure computation performance of the network.
Speaker Gao Yuan; Yu Ding; Weidang Lu
GAO Yuan received the B.S. degree in information engineering from PLA Information Engineering University in 2008 and the Ph.D. degree in communication engineering from Tsinghua University in 2014. He is now an assistant professor with the Academy of Military Science of the PLA and Tsinghua University. His research interests are wireless communication systems, satellite communication systems, network control theory, and big data. Professor Gao is a member of IEEE and ACM, serves as an associate editor for several international journals, and is a guest editor of several special issues. He also serves as a guest reviewer and TPC member for several journals and international conferences, including IEEE JSAC, Transactions on Wireless Communications, Transactions on Communications, IEEE Communications Letters, ICC, and WCNC. He has published more than 80 academic papers in peer-reviewed international journals and conferences.

A Pragmatical Approach to Anomaly Detection Evaluation in Edge Cloud Systems

Sotiris Skaperas (University of Macedonia & ATHENA Research and Innovation Center, Greece); Georgios Koukis and Ioanna Angeliki Kapetanidou (Democritus University of Thrace & ATHENA Research and Innovation Center, Greece); Vassilis Tsaoussidis (Democritus University of Thrace, Greece); Lefteris Mamatas (University of Macedonia, Greece)

Anomaly detection (AD) has been recently employed in the context of edge cloud computing, e.g., for intrusion detection and identification of performance issues. However, state-of-the-art anomaly detection procedures do not systematically consider restrictions and performance requirements inherent to the edge, such as system responsiveness and resource consumption. In this paper, we attempt to investigate the performance of change-point based detectors, i.e., a class of lightweight and accurate AD methods, in relation to the requirements of edge cloud systems. Firstly, we review the theoretical properties of two major categories of change point approaches, i.e., Bayesian and cumulative sum (CUSUM), also discussing their suitability for edge systems. Secondly, we introduce a novel experimental methodology and apply it over two distinct edge cloud test-beds to evaluate the performance of such mechanisms in real-world edge environments. Our experimental results reveal important insights and trade-offs for the applicability and the online performance of the selected change point detectors.
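As a concrete reference for the CUSUM family mentioned above, a minimal one-sided CUSUM detector looks as follows. This is the textbook recursion with illustrative parameters, not the authors' implementation.

    # Minimal one-sided CUSUM change-point detector (textbook form, illustrative).
    def cusum(samples, mean0, slack, threshold):
        """Return the index of the first alarm for an upward mean shift, else None."""
        s = 0.0
        for i, x in enumerate(samples):
            s = max(0.0, s + (x - mean0 - slack))   # accumulate positive drift above mean0
            if s > threshold:
                return i
        return None

    # Example: latency-like samples drifting upward after index 5.
    print(cusum([10, 11, 9, 10, 10, 10, 18, 19, 20, 21], mean0=10, slack=1, threshold=15))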

Session ICCN-S4

ICCN 2024 – Cloud and Edge Applications

Conference: 3:45 PM — 4:45 PM PDT
Local: May 20 Mon, 6:45 PM — 7:45 PM EDT
Location: Regency E

Latency and Bandwidth Benefits of Edge Computing for Scientific Applications

Abdur Rouf and Batyr Charyyev (University of Nevada Reno, USA); Engin Arslan (University of Texas Arlington, USA)

Edge computing is becoming a popular approach to process data captured by end systems. It complements cloud computing by bringing compute resources close to users, whereas in traditional cloud computing the resources reside in centralized data centers. Thus, it provides a unique opportunity for latency-critical and bandwidth-hungry applications. Since it is a relatively new approach compared to cloud computing, it is important to understand how edge computing will benefit different scientific and industrial applications. In this paper, we focus on the AlertWildfire infrastructure, which aims to detect and prevent wildfires using a network of cameras deployed in fire-prone areas. Currently, the AlertWildfire project transfers camera feeds to the cloud for processing and archival. We explore how integrating edge devices such as the Nvidia Jetson Orin and Jetson TX2 into the existing AlertWildfire infrastructure can reduce processing latency and network bandwidth usage. Specifically, we analyze how edge computing can improve the latency of the wildfire detection workflow by processing captured images locally and reduce network demand by providing data compression capability. We show that running wildfire detection models at the edge reduces processing time by up to 70%. We also demonstrate that compressing captured frames at the edge before transmitting them to the cloud (for archival or further processing) can reduce bandwidth demand by 51% for a single camera.
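As a small illustration of the edge-side compression step mentioned above, the sketch below re-encodes a captured frame as JPEG before upload and reports the saving. The file name and quality setting are illustrative assumptions, and this is not the project's actual pipeline.

    # Illustrative edge-side frame compression before upload (Pillow).
    import io
    from PIL import Image

    def compress_frame(path: str, quality: int = 70) -> bytes:
        buf = io.BytesIO()
        Image.open(path).convert("RGB").save(buf, format="JPEG", quality=quality)
        return buf.getvalue()

    raw_size = len(open("frame.png", "rb").read())   # hypothetical captured frame
    jpg = compress_frame("frame.png")
    print(f"compressed to {len(jpg)} bytes ({100 * (1 - len(jpg) / raw_size):.0f}% smaller)")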

Edge-assisted Super Resolution for Volumetric Video Enhancement

Jie Li (Northeast University, China); Di Xu, Zhiming Fan, Jinhua Wang and Xingwei Wang (Northeastern University, China)

Volumetric video allows users to watch videos freely with six degrees of freedom, providing a more immersive experience, so it is highly popular among the public. However, volumetric video data is voluminous, making it extremely challenging to achieve reliable transmission and provide users with a good viewing experience. In this paper, we design a volumetric video transmission model based on edge-end collaboration, tailored to the characteristics of volumetric video. Through this model, we can achieve the transmission of volumetric video from the server side to the client side. Simultaneously, to ensure the user's viewing experience, we propose SRSC, a super-resolution algorithm based on sparse convolution, to enhance video quality. In addition, considering the limited computing resources of the client, we leverage mobile edge computing (MEC) to deploy the super-resolution model on the edge. Finally, we conduct an experimental evaluation, and the results show that our proposed algorithm achieves a significant performance improvement compared with existing methods.

Double-Agent Deep Reinforcement Learning for Adaptive 360-Degree Video Streaming in Mobile Edge Computing Networks

Suzhi Bi, Haoguo Chen, Xian Li, Shuoyao Wang and Xiao-Hui Lin (Shenzhen University, China)

The emerging mobile edge computing (MEC) technology effectively enhances the wireless streaming performance of 360-degree videos. By predicting a user's field of view (FoV) and precaching the video contents at the user's head-mounted device (HMD), it significantly improves the user's quality of experience (QoE) towards stable and high-resolution video playback. In practice, however, the performance of edge video streaming is severely degraded by the random FoV prediction error and wireless channel fading effects. For this, we propose in this paper a novel MEC-enabled adaptive 360-degree video streaming scheme to maximize the user's QoE. In particular, we design a two-stage transmission protocol consisting of a video precaching stage that allows the edge server to send the video files to the user's HMD in advance based on a predicted FoV, and a real-time remedial transmission stage that makes up for the missing files caused by FoV prediction error. To achieve the maximum QoE, we formulate an online optimization problem that assigns the video encoding bitrates in the two stages on the fly. Then, we propose a double-agent deep reinforcement learning (DRL) framework consisting of two smart agents deciding the encoding bitrates in different time scales. Experiments based on real-world measurements show that the proposed two-stage DRL scheme can effectively mitigate FoV prediction errors and maintain stable QoE performance under various practical scenarios.

Accurate Water Gauge Detection by Image Data Augmentation using Erase-Copy-Paste

Guorong Ye and Chen Chen (Xidian University, China); Yang Zhou (Ministry of Water Resources of China, China); Hao Wang (the Xi'an Molead Technology Co. LTD., China); Lei Liu and Qingqi Pei (Xidian University, China)

Water level recognition is significant for flood warnings in the hydrology industry, and it mainly involves water gauge detection and water level line recognition. With video surveillance systems becoming basic facilities of hydrological stations, computer vision technologies are widely used for water gauge detection. However, the common shortage of water gauge images severely limits deep learning performance.

In this study, a data augmentation strategy named ECP is proposed to address the shortage of water gauge images; it extends the dataset by mixing water gauge images. We design two experiments to verify the effectiveness of ECP, especially in practical hydrology industry projects. Numerical results show that ECP improves the average precision and average recall of representative object detection algorithms. In addition, ECP yields a larger performance improvement than other image-mixing methods. Our experimental results indicate that ECP increases dataset diversity and enhances the accuracy and generalization ability of water gauge detection algorithms in practical industrial applications.
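For intuition about what an erase-and-paste style augmentation does, the generic sketch below blanks out one region of a target image and pastes a same-sized patch from another image. The exact ECP procedure (how gauge regions are selected and blended) is defined in the paper; the box parameters here are hypothetical.

    # Generic erase-and-paste augmentation sketch (not the exact ECP procedure).
    import numpy as np

    def erase_copy_paste(dst, src, erase_box, src_box, paste_xy):
        """Blank out erase_box in dst, then paste the src_box patch from src at paste_xy.
        Boxes are (y, x, h, w); paste_xy is (y, x)."""
        out = dst.copy()
        ey, ex, eh, ew = erase_box
        out[ey:ey + eh, ex:ex + ew] = 0                            # erase an existing gauge region
        sy, sx, sh, sw = src_box
        py, px = paste_xy
        out[py:py + sh, px:px + sw] = src[sy:sy + sh, sx:sx + sw]  # paste a gauge patch from another image
        return out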

Session Chair

Xian Li (Shenzhen University, China)

Session ICCN-S5

ICCN 2024 – Cloud and Edge Computing 3

Conference: 4:45 PM — 5:45 PM PDT
Local: May 20 Mon, 7:45 PM — 8:45 PM EDT
Location: Regency E

Hierarchical Charging and Computation Scheduling for Connected Electric Vehicles via Safe Reinforcement Learning

Liang Li (Peng Cheng Laboratory, China); Lianming Xu, Xiaohu Liu, Li Wang and Aiguo Fei (Beijing University of Posts and Telecommunications, China)

By utilizing the energy storage capabilities of connected electric vehicle (CEV) batteries and their computing capacities, grid-powered vehicular networks can facilitate the integration of renewable energy sources and offer energy-efficient computational service provisioning. However, the uncertain electricity/service demand and the grid's physical safety constraints hinder effective CEV resource scheduling. In this paper, we formulate a hierarchical electricity and computing resources scheduling problem for CEVs in a microgrid (MG) under power flow constraints, which is further transformed into a Constrained Markov Decision Process (CMDP). By combining safe reinforcement learning with a linear programming module, we present a reinforcement-learning-based Safe Hierarchical Charging and Computation Scheduling (SHCCS) scheme that operates at two levels: the upper level focuses on large-scale power distribution scheduling for CEV charging stations (CSs), while the lower level handles small-scale charging and computation scheduling for individual CEVs. Through extensive simulations conducted on the IEEE 33-bus test system, we demonstrate that our scheduling scheme improves power system operating revenue while ensuring grid safety.
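For context, the Constrained Markov Decision Process referred to above has the standard form below (generic statement with the usual symbols, not the paper's specific notation), where the reward could be, e.g., operating revenue and the costs could encode power-flow safety limits:

    \max_{\pi}\; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\right]
    \quad \text{s.t.} \quad
    \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, c_j(s_t, a_t)\right] \le d_j,
    \qquad j = 1, \dots, J,

where \gamma is the discount factor and d_j are the constraint budgets; safe reinforcement learning optimizes the policy \pi while keeping the expected cumulative costs within these budgets.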

Multi-Aspect Edge Device Association Based on Time-Series Dynamic Interaction Networks

Xiaoteng Yang, Jie Feng, Xifei Song and Feng Xu (Xidian University, China); Yuan Liu (Guangzhou University & Guangzhou, China); Qingqi Pei (Xidian University, China); Celimuge Wu (The University of Electro-Communications, Japan)

The exploration into the evolution of architectural structures in 6G edge computing nodes through time-series-based dynamic interactions offers a compelling investigation within complex systems. In the realm of 6G edge computing, where nodes play a pivotal role, existing approaches primarily emphasize the locality of nodes or clustering relationships between networks. Traditional time-series network modeling tends to fixate on local static relationships, overlooking the dynamic interactions between real network nodes. To address this limitation, we present a model based on time-series interactions, specifically crafted for 6G edge computing networks. Our model extends beyond traditional boundaries, facilitating a comparative analysis of network formation across diverse datasets, presenting a valuable methodology for conducting evolutionary studies. The model's validity is demonstrated through evaluations on two real network datasets. Notably, within the 6G edge, a discernible structure emerges when the preference for high-level nodes surpasses a critical threshold.

Self-Interested Load Announcement by Edge Servers: Overreport or Underreport?

Chen Huang (Southern University of Science and Technology, China); Zhiyuan Wang (Beihang University, China); Ming Tang (Southern University of Science and Technology, China)

In mobile edge computing (MEC), customers (e.g., mobile devices) can offload computation-intensive tasks to edge servers to achieve low processing latency. However, as part of the MEC business ecosystem, an edge server can be self-interested. It may misreport the expected waiting time (at the edge server) to customers to increase its payoff. In this work, we formulate a two-stage game to investigate this misreporting behavior. In Stage I, the edge server decides a misreporting coefficient for untruthfully announcing the expected waiting time. In Stage II, when a customer has a new task arrival, it decides whether to offload the task based on its patience. Overcoming the challenge of stochastic task arrivals in Stage II, we determine the rate of task arrivals offloaded to the edge server. Overcoming the challenge of the nonconvexity of the Stage I problem, we prove the existence and uniqueness of the optimal misreporting coefficient. Meanwhile, we prove that as the customers' patience increases, the edge server's optimal misreporting coefficient increases, while its optimal payoff remains unchanged. Our experimental results with real-world datasets show that the traffic intensity threshold separating overreporting from underreporting lies in the range 0.5-0.9.

Flow Size Prediction with Short Time Gaps

Seyed Morteza Hosseini, Sogand Sadrhaghighi and Majid Ghaderi (University of Calgary, Canada)

Having a priori knowledge about network flow sizes is invaluable for network traffic control. Previous efforts on estimating flow sizes have focused on long flows, where each flow is identified by a large time gap in the sequence of packets. In this work, using extensive measurements, we investigate the feasibility of predicting the size of short flows, where the flow duration can be on the order of microseconds. Specifically, we deploy several popular workloads in a public cloud testbed and collect both network and host traces for each workload. The network trace contains standard packet metadata, while the host trace contains high-level host statistics (e.g., memory usage and disk I/O) and low-level function call traces (e.g., malloc(), send()) captured during the execution of each workload via host instrumentation using eBPF. These traces are then used to train machine learning models for flow size prediction with varying time gaps ranging from microseconds to milliseconds. Our results indicate that: (1) it is feasible to predict short flow sizes with high accuracy, i.e., with percentage error in the 0-12% range; and (2) the low-level traces lead to a 10-20% improvement in prediction accuracy compared to using only the network and high-level traces.
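As a rough sketch of the prediction setup described above, the snippet below trains a standard regressor on per-flow feature vectors built from early packet metadata and host counters. The feature files and split are hypothetical placeholders, not the authors' pipeline.

    # Hedged sketch: flow-size regression on combined network + host features.
    # 'flow_features.npy' and 'flow_sizes.npy' are hypothetical placeholder files.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    X = np.load("flow_features.npy")   # one row per flow: early packet sizes, gaps, mem usage, send() counts, ...
    y = np.load("flow_sizes.npy")      # eventual flow size in bytes

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = GradientBoostingRegressor().fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print("median percentage error:", np.median(np.abs(pred - y_te) / y_te) * 100)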
Speaker Seyed Morteza Hosseini



Session Chair

Shengyuan Ye (Sun Yat-Sen University, China)
