Session E-7


May 5 Thu, 10:00 AM — 11:30 AM EDT

Energy-Efficient Trajectory Optimization for Aerial Video Surveillance under QoS Constraints

Cheng Zhan (Southwest University, China); Han Hu (Beijing Institute of Technology, China); Shiwen Mao (Auburn University, USA); Jing Wang (Renmin University of China, China)

In this paper, we propose a novel design framework for aerial video surveillance in urban areas, where a cellular-connected UAV captures video and transmits it to the cellular network that serves ground users. Fundamental challenges arise from the limited onboard energy and the quality of service (QoS) requirements over environment-dependent air-to-ground cellular links, where UAVs are usually served by the sidelobes of base stations (BSs). We aim to minimize the energy consumption of the UAV by jointly optimizing the mission completion time, the UAV trajectory, and the transmission scheduling and association, subject to QoS constraints. The problem is formulated as a mixed-integer nonlinear program (MINLP) that takes into account building blockage and BS antenna patterns. We first consider the average performance over uncertain local environments and obtain an efficient sub-optimal solution by employing graph theory and convex optimization techniques. Next, we investigate the site-specific performance for a given urban environment. We reformulate the problem as a Markov decision process (MDP) and propose a deep reinforcement learning (DRL) algorithm that employs a dueling deep Q-network (DQN) model with only local observations of sampled rate measurements. Simulation results show that the proposed solutions achieve significant performance gains over baseline schemes.
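The dueling DQN mentioned in the abstract splits Q-value estimation into a state-value stream and an advantage stream. A minimal sketch of the aggregation step (illustrative only; the values and the four-action setup are invented, not from the paper):

```python
# Illustrative sketch of the dueling DQN aggregation: Q(s, a) is
# reconstructed from a state-value stream V(s) and an advantage stream
# A(s, a). All numbers are made-up examples.

def dueling_q_values(value, advantages):
    """Combine streams as Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    mean_adv = sum(advantages) / len(advantages)
    return [value + adv - mean_adv for adv in advantages]

# One state value and advantages for four candidate UAV actions.
q = dueling_q_values(2.0, [1.0, -1.0, 0.5, -0.5])
best_action = max(range(len(q)), key=lambda i: q[i])
```

Subtracting the mean advantage makes the decomposition identifiable, which is the standard motivation for the dueling architecture.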

GADGET: Online Resource Optimization for Scheduling Ring-All-Reduce Learning Jobs

Menglu Yu and Ye Tian (Iowa State University, USA); Bo Ji (Virginia Tech, USA); Chuan Wu (The University of Hong Kong, Hong Kong); Hridesh Rajan (Iowa State University, USA); Jia Liu (The Ohio State University, USA)

Fueled by advances in distributed deep learning (DDL), recent years have witnessed a rapidly growing demand for resource-intensive distributed/parallel computing to process DDL jobs. To resolve network communication bottlenecks and load-balancing issues in distributed computing, the so-called "ring-all-reduce" decentralized architecture has been increasingly adopted to remove the need for dedicated parameter servers. To date, however, there remains a lack of theoretical understanding of how to design resource optimization algorithms for efficiently scheduling ring-all-reduce DDL jobs in computing clusters. This motivates us to fill the gap by proposing a series of new resource scheduling designs for ring-all-reduce DDL jobs. Our contributions are three-fold: i) we propose a new analytical resource scheduling model for ring-all-reduce deep learning, which covers a wide range of objectives in DDL performance optimization (e.g., avoiding excessive training, energy efficiency, fairness); ii) based on this model, we develop an efficient resource scheduling algorithm called GADGET (greedy ring-all-reduce distributed graph embedding technique), which enjoys a provably strong performance guarantee; iii) we conduct extensive trace-driven experiments to demonstrate the effectiveness of GADGET and its superiority over the state of the art.
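The ring-all-reduce pattern that GADGET schedules can be pictured with a plain simulation (not the paper's scheduler): each worker exchanges a single chunk per step with its ring neighbour, and every worker ends with the global sum without any parameter server.

```python
def ring_all_reduce(vectors):
    """Simulate ring-all-reduce over n workers, each holding a vector of
    n chunks (one scalar per chunk for simplicity). Every worker ends
    with the element-wise sum while exchanging only one chunk per step
    with its ring neighbour, so no parameter server is needed."""
    n = len(vectors)
    chunks = [list(v) for v in vectors]
    # Scatter-reduce phase: after n-1 steps, worker i holds the fully
    # summed value of chunk (i + 1) % n.
    for s in range(n - 1):
        for r in range(n):
            c = (r - 1 - s) % n
            chunks[r][c] += chunks[(r - 1) % n][c]
    # All-gather phase: circulate each completed chunk around the ring.
    for s in range(n - 1):
        for r in range(n):
            c = (r - s) % n
            chunks[r][c] = chunks[(r - 1) % n][c]
    return chunks
```

Each of the 2(n-1) steps moves only 1/n of the data per worker, which is why the architecture relieves the communication bottleneck the abstract refers to.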

Midpoint Optimization for Segment Routing

Alexander Brundiers (Osnabrück University, Germany); Timmy Schüller (Deutsche Telekom Technik GmbH & Osnabrück University, Germany); Nils Aschenbruck (Osnabrück University, Germany)

In this paper, we discuss the concept of Midpoint Optimization (MO) for Segment Routing (SR). It is based on the idea of integrating SR policies into the Interior Gateway Protocol (IGP) so that multiple demands can be steered into them. We discuss the benefits of this approach compared to end-to-end SR, as well as potential challenges that might arise in deployment. We further develop an LP-based optimization algorithm to assess the traffic engineering capabilities of MO for SR. Based on traffic and topology data from a Tier-1 Internet Service Provider, as well as other publicly available data, we show that this algorithm achieves virtually optimal results with regard to the maximum link utilization, on par with state-of-the-art end-to-end SR approaches, while requiring substantially fewer policies. For some instances, the reduction exceeds 99%.
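The maximum link utilization (MLU) objective above is the worst load-to-capacity ratio across links; steering a demand off a hot link via a midpoint policy lowers it. A toy illustration (link loads and capacities are invented):

```python
def max_link_utilization(loads, capacities):
    """MLU: the highest load-to-capacity ratio over all links, the
    objective that TE schemes such as MO for SR aim to minimize."""
    return max(load / cap for load, cap in zip(loads, capacities))

# Three links of capacity 100: a demand of 30 currently rides the
# congested middle link.
before = max_link_utilization([40, 90, 30], [100, 100, 100])
# After a midpoint policy steers that demand onto the other two links:
after = max_link_utilization([70, 60, 60], [100, 100, 100])
```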

On Designing Secure Cross-user Redundancy Elimination for WAN Optimization

Yuan Zhang, Ziwei Zhang, Minze Xu, Chen Tian and Sheng Zhong (Nanjing University, China)

Redundancy elimination (RE) systems allow network users to remove duplicate parts of their messages by introducing caches at both the sender's and the receiver's side. While RE systems have been successfully deployed for unencrypted traffic, making them work over encrypted links remains an open problem. A few solutions have been proposed recently; however, they either completely violate end-to-end security or focus on the single-user setting. In this paper, we present a highly secure RE solution that supports cross-user redundancy elimination on encrypted traffic. Our solution not only preserves end-to-end security against outside adversaries but also protects users' privacy against semi-honest RE agents. Furthermore, it defends against malicious users' poisoning attacks, a threat that is crucial for cross-user RE systems but has never been studied before. In cross-user RE systems, all users inside a LAN write into a shared, global cache and use it to recover their original messages from deduplicated ones; poisoning attacks are therefore easy to mount and can cause systemic damage to all users even when only one user is malicious and injects poisoned data into the cache. We rigorously prove our solution's security properties and demonstrate its promising performance by testing a proof-of-concept implementation on real-world Internet traffic.
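For context, plain (non-secure) cross-user RE works roughly as sketched below: chunks already in the shared cache are replaced by hash references. This is illustrative only and omits all of the paper's cryptographic protections; note how a poisoned `cache` entry would corrupt every later reconstruction that references it.

```python
import hashlib

def deduplicate(message, cache, chunk_size=4):
    """Split a message into fixed-size chunks and replace chunks already
    present in the shared cache by short hash references."""
    tokens = []
    for i in range(0, len(message), chunk_size):
        chunk = message[i:i + chunk_size]
        key = hashlib.sha256(chunk.encode()).hexdigest()[:8]
        if key in cache:
            tokens.append(("ref", key))   # duplicate: send only a reference
        else:
            cache[key] = chunk            # first sighting: cache and send raw
            tokens.append(("raw", chunk))
    return tokens

def reconstruct(tokens, cache):
    """Receiver side: resolve references against the shared cache."""
    return "".join(data if kind == "raw" else cache[data] for kind, data in tokens)
```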

Session Chair

Zhenzhe Zheng (Shanghai Jiao Tong University)

Session E-8

Resource Management

May 5 Thu, 12:00 PM — 1:30 PM EDT

Energy Saving in Heterogeneous Wireless Rechargeable Sensor Networks

Riheng Jia, Jinhao Wu, Jianfeng Lu, Minglu Li, Feilong Lin and Zhonglong Zheng (Zhejiang Normal University, China)

Mobile chargers (MCs) are usually dispatched to deliver energy to sensors in wireless rechargeable sensor networks (WRSNs) due to their flexibility and easy maintenance. This paper concerns the fundamental issue of charging path DEsign with the Minimized energy cOst (DEMO): given a set of rechargeable sensors, design the MC's charging path so as to minimize the energy cost incurred by wireless charging and the MC's movement, while satisfying each sensor's distinct charging demand. DEMO is NP-hard and involves a tradeoff between charging efficiency and moving cost. To address it, we first develop a computational-geometry-based algorithm to deploy multiple charging positions where the MC stops to charge nearby sensors. We prove that the algorithm has an approximation ratio of O(ln N), where N is the number of sensors. We then construct the charging path by computing the shortest Hamiltonian cycle through all deployed charging positions in the network. Extensive evaluations validate the effectiveness of our path design in minimizing the MC's energy cost.
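Since the shortest Hamiltonian cycle is NP-hard, a simple nearest-neighbour heuristic conveys the path-construction step (a sketch with invented coordinates, not the paper's algorithm):

```python
import math

def charging_tour(positions, start=0):
    """Nearest-neighbour heuristic for a closed tour through all charging
    positions (computing the exact shortest Hamiltonian cycle is NP-hard)."""
    unvisited = set(range(len(positions))) - {start}
    tour, cur = [start], start
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(positions[cur], positions[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
        cur = nxt
    return tour

def tour_length(positions, tour):
    """Total length of the closed cycle, returning to the start."""
    return sum(math.dist(positions[tour[i]], positions[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))
```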

Escala: Timely Elastic Scaling of Control Channels in Network Measurement

Hongyan Liu (Zhejiang University, China); Xiang Chen (Zhejiang University, Peking University, and Fuzhou University, China); Qun Huang (Peking University, China); Dezhang Kong (Zhejiang University, China); Sun Jinbo (Institute of Computing Technology, Chinese Academy of Sciences, China); Dong Zhang (Fuzhou University, China); Haifeng Zhou and Chunming Wu (Zhejiang University, China)

In network measurement, data plane switches measure traffic and report events (e.g., heavy hitters) to the control plane via control channels, and the control plane makes decisions to process these events. However, current network measurement suffers from two problems. First, when traffic bursts occur, massive numbers of events are reported in a short time, so the control channels may be overloaded due to their limited bandwidth capacity. Second, in normal cases only a few events are reported, leaving control channels underloaded and wasting network resources. In this paper, we propose Escala to provide elastic scaling of control channels at runtime. The key idea is to dynamically migrate event streams among control channels to regulate their loads. Escala comprises two components: a monitor that detects scaling situations based on real-time network statistics, and an optimization framework that makes scaling decisions to eliminate overload and underload. We have implemented a prototype of Escala on Tofino-based switches. Extensive experiments show that Escala achieves timely elastic scaling while preserving high application-level accuracy.
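The load-regulation idea can be illustrated with a first-fit-decreasing packing of event streams onto channels (Escala's actual optimization framework is more sophisticated; the rates and capacities below are invented):

```python
def rebalance(streams, capacity):
    """First-fit-decreasing packing of (rate, id) event streams onto
    control channels so that no channel exceeds its bandwidth capacity."""
    channels = []  # each entry: [residual_capacity, [stream ids]]
    for rate, sid in sorted(streams, reverse=True):
        for ch in channels:
            if ch[0] >= rate:          # fits in an existing channel
                ch[0] -= rate
                ch[1].append(sid)
                break
        else:
            channels.append([capacity - rate, [sid]])  # open a new channel
    return channels
```

Packing the heaviest streams first avoids overload while keeping the number of open channels, and hence wasted resources, low.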

LSAB: Enhancing Spatio-Temporal Efficiency of AoA Tracking Systems

Qingrui Pan, Zhenlin An and Qiongzheng Lin (The Hong Kong Polytechnic University, Hong Kong); Lei Yang (The Hong Kong Polytechnic University, China)

Estimating the angle-of-arrival (AoA) of an RF source using a large antenna array is a classical topic in wireless systems. However, AoA tracking systems are not yet used for the Internet of Things (IoT) in the real world due to their unaffordable cost. Many efforts, such as time-sharing, emulated, and sparse arrays, have recently been made to cut this cost. This work introduces the log-spiral antenna belt (LSAB), a novel sparse "planar array" that can estimate the AoA of an IoT device in 3D space using a few antennas connected to a single time-shared channel. Unlike conventional arrays, LSAB deploys antennas on a log-spiral-shaped belt in a non-linear manner, following the theory of minimum resolution redundancy newly discovered in this work. One physical 8×8 uniform planar array (UPA) and four logical sparse arrays, including LSAB, were prototyped to validate the theory and evaluate the performance of sparse arrays. Extensive benchmarks demonstrate that LSAB performs comparably to the UPA, with a similar degree of resolution, and provides over 40% performance improvement over existing sparse arrays.
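A log-spiral placement puts antenna k at radius r = a * exp(b * theta_k) along the belt. A sketch with illustrative parameters (the constants below are assumptions, not the paper's values):

```python
import math

def log_spiral_positions(n, a=0.05, b=0.15, delta=0.8):
    """Antenna k sits at angle theta = k * delta on the logarithmic
    spiral r = a * exp(b * theta); all parameters are illustrative."""
    points = []
    for k in range(n):
        theta = k * delta
        r = a * math.exp(b * theta)
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points
```

The non-linear spacing is the point: successive antennas sit at strictly increasing radii, unlike the uniform grid of a UPA.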

StepConf: SLO-Aware Dynamic Resource Configuration for Serverless Function Workflows

Zhaojie Wen, Yishuo Wang and Fangming Liu (Huazhong University of Science and Technology, China)

Function-as-a-Service (FaaS) offers a fine-grained resource provisioning model, enabling developers to build highly elastic cloud applications. User requests are handled by a series of serverless functions step by step, forming a function-based workflow. Developers need an optimal resource configuration strategy for all functions in the workflow to meet their service level objectives (SLOs) while saving cost. However, developing such a strategy is nontrivial, mainly because cloud functions often suffer from unpredictable cold starts and performance degradation, which calls for an online configuration strategy to guarantee the SLOs.

In this paper, we present StepConf, a framework that automates resource configuration for functions as the workflow runs. StepConf optimizes the memory size for each function step in the workflow and takes inter-function parallelism into consideration, a crucial factor influencing workflow performance.
We evaluate StepConf using a video processing workflow on AWS Lambda. Compared with a static strategy, the experimental results show that StepConf reduces cost by 10.37% on average.
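One simplified way to picture per-step configuration is to give each step an even share of the end-to-end SLO and pick the cheapest memory size meeting that budget (a sketch only; StepConf's online algorithm is more sophisticated, and all cost/latency numbers are invented):

```python
def configure_workflow(steps, slo_ms):
    """Give each step an even share of the end-to-end SLO and pick its
    cheapest (cost, latency_ms) configuration within that budget."""
    budget = slo_ms / len(steps)
    plan = []
    for configs in steps:
        feasible = [c for c in configs if c[1] <= budget]
        if not feasible:
            return None  # SLO unattainable under an even split
        plan.append(min(feasible))  # cheapest feasible config
    return plan

# Two steps, each offering three memory sizes as (cost, latency in ms).
step_options = [(1.0, 900), (1.6, 500), (2.4, 320)]
plan = configure_workflow([step_options, step_options], slo_ms=1200)
```

An even split is suboptimal in general, which is precisely the slack a framework like StepConf can exploit by redistributing budget across steps at runtime.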

Session Chair

Zhi Sun (Tsinghua University)

Session E-9

Networks and Monitoring

May 5 Thu, 2:30 PM — 4:00 PM EDT

Accelerating Deep Learning classification with error-controlled approximate-key caching

Alessandro Finamore (HUAWEI France, France); Massimo Gallo (Huawei, France); James Roberts (Telecom ParisTech, France); Dario Rossi (Huawei Technologies, France)

While Deep Learning (DL) technologies are a promising tool for solving networking problems that map to classification tasks, their computational complexity is still too high with respect to real-time traffic measurement requirements. To reduce the DL inference cost, we propose a novel caching paradigm, which we name approximate-key caching, that returns approximate results for lookups of selected inputs based on cached DL inference results. While approximate cache hits alleviate the DL inference workload and increase system throughput, they introduce an approximation error. We therefore couple approximate-key caching with a principled error-correction algorithm, which we name auto-refresh. We analytically model our caching system's performance for classic LRU and ideal caches, perform a trace-driven evaluation of the expected performance, and compare the benefits of our approach with state-of-the-art similarity caching, attesting to the practical interest of our proposal.
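Approximate-key caching can be pictured as an LRU cache whose keys are quantized feature vectors, so near-identical inputs hit the same cached inference result (a sketch with an invented quantization grid, not the paper's scheme):

```python
from collections import OrderedDict

class ApproxKeyCache:
    """LRU cache that quantizes feature vectors so near-identical inputs
    hit the same entry; hits skip DL inference at the price of a bounded
    approximation error (the grid size q is illustrative)."""

    def __init__(self, capacity, q=0.1):
        self.capacity, self.q = capacity, q
        self.store = OrderedDict()

    def _key(self, features):
        # Snap each feature to a grid of step q.
        return tuple(round(f / self.q) for f in features)

    def get(self, features):
        k = self._key(features)
        if k in self.store:
            self.store.move_to_end(k)  # refresh LRU position on a hit
            return self.store[k]
        return None  # miss: caller must run DL inference

    def put(self, features, label):
        k = self._key(features)
        self.store[k] = label
        self.store.move_to_end(k)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least-recently used
```

A coarser grid raises the hit rate but widens the approximation error, which is the trade-off the paper's auto-refresh mechanism is designed to control.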

Lightweight Trilinear Pooling based Tensor Completion for Network Traffic Monitoring

Yudian Ouyang and Kun Xie (Hunan University, China); Xin Wang (Stony Brook University, USA); Jigang Wen (Chinese Academy of Science & Institute of Computing Technology, China); Guangxing Zhang (Institute of Computing Technology Chinese Academy of Sciences, China)

Network traffic engineering and anomaly detection rely heavily on network traffic measurement. Due to the lack of infrastructure to measure all points of interest, the high measurement cost, and unavoidable transmission loss, network monitoring systems suffer from incomplete traffic data, with only a subset of paths or time slots measured. Recent studies show that tensor completion can be applied to infer the missing traffic data from partial measurements. Although promising, the interaction model adopted in current tensor completion algorithms can only capture linear and simple correlations in the traffic data, which compromises recovery performance. To solve this problem, we propose a new tensor completion scheme based on lightweight trilinear pooling, which comprises (1) trilinear pooling, a new multi-modal fusion method that models the interaction function to capture complex correlations; (2) a low-rank-decomposition-based neural network compression method that reduces storage and computation complexity; and (3) an attention-enhanced LSTM that encodes and incorporates temporal patterns into the tensor completion scheme. Extensive experiments on three real-world network traffic datasets demonstrate that our scheme significantly reduces the error in missing data recovery, at high speed and with a small storage footprint.
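At its core, a trilinear interaction predicts a tensor entry from three embeddings, as in CP decomposition (the paper's trilinear pooling adds learned nonlinearity on top; this sketch shows only the basic fusion):

```python
def trilinear_entry(u, v, w):
    """Predict tensor entry X[i, j, k] as sum_r u[r] * v[r] * w[r], the
    basic trilinear (CP-style) interaction of rank-R origin, destination,
    and time-slot embeddings."""
    return sum(a * b * c for a, b, c in zip(u, v, w))

# Rank-2 embeddings for one (origin, destination, time) triple:
x_ijk = trilinear_entry([1, 2], [3, 4], [5, 6])  # 1*3*5 + 2*4*6
```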

LossLeaP: Learning to Predict for Intent-Based Networking

Alan Collet (IMDEA Networks Institute, Spain); Albert Banchs (Universidad Carlos III de Madrid, Spain); Marco Fiore (IMDEA Networks Institute, Spain)

Intent-Based Networking mandates that high-level, human-understandable intents be automatically interpreted and implemented by network management entities. As a key part of this process, network orchestrators must activate the correct automated decision model to meet the intent objective. In anticipatory networking tasks, this requirement maps to identifying and deploying a tailored prediction model that can produce a forecast aligned with the specific (and typically complex) network management goal expressed by the original intent. Current forecasting models for network demands or network management optimize generic, inflexible, manually designed objectives, and hence do not fulfil the needs of anticipatory Intent-Based Networking. To close this gap, we propose LossLeaP, a novel forecasting model that can autonomously learn the relationship between the prediction and the target management objective, steering the former to minimize the latter. To this end, LossLeaP adopts an original deep learning architecture that advances current efforts in automated machine learning towards the spontaneous design of loss functions for regression tasks. Extensive experiments in controlled environments and in practical application case studies prove that LossLeaP outperforms a wide range of benchmarks, including state-of-the-art solutions for network capacity forecasting.
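As a concrete example of a management objective that a generic MSE forecaster ignores, capacity forecasting often penalizes under-provisioning far more than over-provisioning (the asymmetric cost function and the factor below are illustrative, not from the paper):

```python
def management_cost(pred, actual, alpha=5.0):
    """Asymmetric objective: under-provisioning (pred < actual) costs
    alpha times more per unit than over-provisioning. The alpha value
    is an illustrative assumption."""
    err = pred - actual
    return -alpha * err if err < 0 else float(err)

# For a true demand of 10 units, over-forecasting by 2 is cheaper than
# under-forecasting by 2 under this objective, even though MSE treats
# the two errors identically.
under = management_cost(8, 10)
over = management_cost(12, 10)
```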

Network Tomography based on Adaptive Measurements in Probabilistic Routing

Hiroki Ikeuchi (NTT Corporation, Japan); Hiroshi Saito (University of Tokyo & Mathematics and Informatics Center, Japan); Kotaro Matsuda (NTT, Japan)

We discuss Boolean network tomography in a probabilistic routing environment. Although stochastic routing behavior arises in load-balancing mechanisms and common routing protocols, it has received little attention in network tomography so far. In probabilistic routing, because monitoring paths are not uniquely determined, a huge number of measurements is generally required to identify the network state. To overcome this difficulty, we propose a network tomography method that efficiently narrows down the states with a limited number of measurements by iteratively updating the posterior over states. The method uses mutual information as a measure of the effectiveness of a probabilistic monitoring path, allowing us to prioritize the measurements that are most effective in identifying the state and thus significantly reducing the number of measurements required. We show that our method has a theoretical (1 − 1/e) approximation guarantee based on a submodularity analysis. Numerical evaluations show that our method identifies network states with far fewer measurements than existing methods.
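The (1 − 1/e) guarantee comes from greedily maximizing a submodular objective. A sketch with set coverage standing in for the paper's mutual-information gain (the candidate measurements and states are invented):

```python
def greedy_measurements(candidates, budget):
    """Greedily pick measurements with the largest marginal coverage of
    candidate network states; for monotone submodular objectives this
    greedy rule attains the classic (1 - 1/e) approximation ratio."""
    chosen, covered = [], set()
    for _ in range(budget):
        best = max(candidates, key=lambda m: len(candidates[m] - covered))
        if not candidates[best] - covered:
            break  # no remaining measurement adds information
        chosen.append(best)
        covered |= candidates[best]
    return chosen, covered
```

Here coverage is a stand-in; the paper instead scores each probabilistic monitoring path by the mutual information between its outcome and the unknown state, which is what makes the posterior-update loop sample-efficient.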

Session Chair

Hao Wang (Louisiana State University)
