Session 7-A

Communication in Challenging Environments

Conference
9:00 AM — 10:30 AM EDT
Local
Jul 9 Thu, 6:00 AM — 7:30 AM PDT

MAGIC: Magnetic Resonance Coupling for Intra-body Communications

Stella Banou, Kai Li and Kaushik Chowdhury (Northeastern University, USA)

This paper proposes MAGIC, which uses magnetic resonant (MR) coupling for intra-body communication between implants and wearables. MAGIC includes not only the hardware-software design of the coupled coils and methods of manipulating the magnetic field to relay information, but also the ability to raise immediate emergency-related alerts. MR coupling makes the transmission link robust to channel-related parameters, as the magnetic permeability of skin and muscle is close to that of air. Thus, changes in tissue moisture content and thickness do not impact the design, which is a persistent problem in other approaches to implant communication such as RF, ultrasound, and galvanic coupling (GC). The paper makes three main contributions. First, it develops the theory leading to the design of the information-relaying coils in MAGIC. Second, it proposes a systems-level design of a communication link that extends up to 50 cm with a low expected BER of 10^-4. Finally, it presents an experimental setup demonstrating how MAGIC operates in air and muscle tissue, as well as a comparison with alternative implant communication technologies, such as classical RF and GC. Results reveal that MAGIC offers instantaneous alerts with up to 5 times lower power consumption than other forms of communication.
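The MR-coupling link depends on tuning the transmit and receive coils to a common resonant frequency; a quick back-of-the-envelope helper illustrates the relationship. The component values below are illustrative assumptions, not taken from the paper.

```python
import math

def resonant_frequency(inductance_h, capacitance_f):
    """Resonant frequency f = 1 / (2*pi*sqrt(L*C)) of an LC tank.

    Transmit and receive coils tuned to the same frequency maximize
    power transfer in magnetic resonant coupling.
    """
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Illustrative values: a 10 uH coil with a 100 pF tuning capacitor.
f = resonant_frequency(10e-6, 100e-12)
print(f"{f / 1e6:.2f} MHz")  # 5.03 MHz
```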

Dynamically Adaptive Cooperation Transmission among Satellite-Ground Integrated Networks

Feilong Tang (Shanghai Jiao Tong University, China)

Fusing satellite and ground networks into satellite-ground integrated networks (SGINs) is a desirable goal for improving resource utilization efficiency. However, existing work has not considered how to integrate them as a whole network, because it lacks function-configurable network management and efficient cooperation between satellite and ground networks. In this paper, we first propose an SDN-based network architecture that manages and schedules SGIN resources in a layered, on-demand way. Then, we formulate dynamic cooperative transmission in SGINs as an optimization problem and prove its NP-hardness. Finally, we propose a Satellite-Ground Cooperative Transmission (SGCT) algorithm based on dynamic cooperation between satellite and ground networks, which is network-aware and workload-driven. Comprehensive experimental results demonstrate that our approach outperforms related schemes in terms of network throughput, end-to-end delay, transmission quality, and load balancing.

Synergetic Denial-of-Service Attacks and Defense in Underwater Named Data Networking

Yue Li and Yingjian Liu (Ocean University of China, China); Yu Wang (Temple University, USA); Zhongwen Guo, Haoyu Yin and Hao Teng (Ocean University of China, China)

Due to the harsh environment and energy limitations, maintaining efficient communication is crucial to the lifetime of Underwater Sensor Networks (UWSNs). Named Data Networking (NDN), one of the candidate future network architectures, is beginning to be applied to UWSNs. Although Underwater Named Data Networking (UNDN) performs well in data transmission, it still faces security threats, such as the Denial-of-Service (DoS) attacks caused by Interest Flooding Attacks (IFAs). In this paper, we present a new type of DoS attack, named Synergetic Denial-of-Service (SDoS). Attackers synergize with each other, taking turns to reply to malicious Interests as late as possible. SDoS attacks damage the Pending Interest Table (PIT), Content Store (CS), and Forwarding Information Base (FIB) in routers with high concealment. Simulation results demonstrate that SDoS attacks quadruple the network traffic increase caused by normal IFAs, and that the only existing IFA detection algorithm in UNDN is completely ineffective against SDoS attacks. In addition, we analyze the infection problem in UNDN and propose Trident: a defense method with adaptive thresholds, burst-traffic judgment, and attacker identification. Simulation experiments illustrate that Trident can effectively detect and resist both SDoS attacks and normal IFAs. Meanwhile, Trident remains robust under burst traffic and congestion.
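The adaptive-threshold idea behind defenses like Trident can be sketched as a baseline-tracking detector: maintain a smoothed estimate of the normal Interest rate and flag samples that exceed a multiple of it. This is a hypothetical minimal illustration of the general technique; the paper's actual algorithm additionally performs burst-traffic judgment and attacker identification, and its parameters are not these.

```python
class AdaptiveThresholdDetector:
    """Flag Interest-rate spikes against an EWMA baseline.

    Illustrative sketch of adaptive-threshold detection; the alpha and
    factor parameters here are arbitrary choices, not Trident's.
    """
    def __init__(self, alpha=0.2, factor=3.0):
        self.alpha = alpha      # EWMA smoothing weight
        self.factor = factor    # spike threshold = factor * baseline
        self.baseline = None

    def observe(self, rate):
        if self.baseline is None:
            self.baseline = rate
            return False
        alarm = rate > self.factor * self.baseline
        if not alarm:  # only learn from traffic judged benign
            self.baseline = (1 - self.alpha) * self.baseline + self.alpha * rate
        return alarm

det = AdaptiveThresholdDetector()
rates = [100, 110, 95, 105, 500]   # last sample is a flooding burst
print([det.observe(r) for r in rates])  # [False, False, False, False, True]
```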

An Energy Efficiency Multi-Level Transmission Strategy based on underwater multimodal communication in UWSNs

Zhao Zhao, Chunfeng Liu, Wenyu Qu and Tao Yu (Tianjin University, China)

This paper concerns data transmission strategies based on underwater multimodal communication for marine applications in underwater wireless sensor networks (UWSNs). Underwater data required by various applications have different values of information (VoI) depending on event type and timeliness. These data should be transmitted with different latencies according to their VoI, to accommodate both application requirements and network performance. Our objective is to design a multi-level transmission strategy using an underwater multimodal communication system, so that multiple paths with different transmission delays and energy consumption are available for underwater data in UWSNs. For this purpose, we first define a minimum cost flow (MCF) model for the design of a transmission strategy that considers time latency, energy efficiency, and transfer load. Then a distributed multi-level transmission strategy, EMTS, is given based on a time-backoff method for large-scale UWSNs. Finally, we compare the transmission latency, energy efficiency, and network lifetime of EMTS against the optimal solution of the MCF model and a greedy transmission algorithm. Although the latency of EMTS is slightly higher than that of the other algorithms, its average network lifetime reaches 88.7% of that of the optimal solution of the MCF model.

Session Chair

Lan Wang (University of Memphis)

Session 7-B

Network Modeling

Conference
9:00 AM — 10:30 AM EDT
Local
Jul 9 Thu, 6:00 AM — 7:30 AM PDT

Bound-based Network Tomography for Inferring Interesting Link Metrics

Huikang Li, Yi Gao, Wei Dong and Chun Chen (Zhejiang University, China)

Network tomography is an attractive methodology for inferring internal network states from accumulated path measurements between pairs of monitors. Motivated by previous results that identifying all link metrics can require a large number of monitors, we focus on calculating the performance bounds of a set of interesting links, i.e., bound-based network tomography. We develop an efficient solution to obtain the tightest upper and lower bounds of all interesting links in an arbitrary network with a given set of end-to-end path measurements. Based on this solution, we further propose an algorithm to place new monitors over existing ones such that the bounds of interesting links can be maximally tightened. We theoretically prove the effectiveness of the proposed algorithms. We implement the algorithms and conduct extensive experiments based on real network topologies. Compared with state-of-the-art approaches, our algorithms achieve 2.2-3.1 times greater reduction of the bound intervals for all interesting links and significantly reduce the number of placed monitors in various network settings.
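The simplest form of the bound computation can be sketched directly: for additive, nonnegative link metrics (e.g., delay), any measured path upper-bounds every link it contains, so the minimum over covering paths is a valid upper bound. This is a toy version of the idea under those assumptions; the paper's algorithms compute the tightest bounds, which require more than this.

```python
def link_upper_bounds(paths, measurements, interesting_links):
    """Upper-bound each interesting link's metric.

    paths: list of sets of link ids; measurements: end-to-end path sums.
    Assumes additive, nonnegative metrics, so a link's metric cannot
    exceed the measurement of any path containing it.
    """
    bounds = {}
    for link in interesting_links:
        covering = [m for p, m in zip(paths, measurements) if link in p]
        bounds[link] = min(covering) if covering else float("inf")
    return bounds

paths = [{"a", "b"}, {"b", "c"}, {"a", "c"}]
meas = [5.0, 3.0, 6.0]
print(link_upper_bounds(paths, meas, ["a", "b", "c"]))
# {'a': 5.0, 'b': 3.0, 'c': 3.0}
```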

ProTO: Proactive Topology Obfuscation Against Adversarial Network Topology Inference

Tao Hou and Zhe Qu (University of South Florida, USA); Tao Wang (New Mexico State University, USA); Zhuo Lu and Yao Liu (University of South Florida, USA)

The topology of a network is fundamental to building network infrastructure functionalities. In many scenarios, enterprise networks may not wish to disclose their topology information. In this paper, we aim to prevent attacks that use adversarial, active end-to-end topology inference to obtain the topology information of a target network. To this end, we propose a Proactive Topology Obfuscation (ProTO) system that adopts a detect-then-obfuscate framework: (i) a lightweight, machine-learning-based probing behavior identification mechanism detects any probing behavior, and then (ii) a topology obfuscation design proactively delays all identified probe packets so that the attacker obtains a structurally accurate yet fake network topology based on the measurements of these delayed probe packets, thereby deceiving the attacker and decreasing its appetite for future inference.

SpreadSketch: Toward Invertible and Network-Wide Detection of Superspreaders

Lu Tang (The Chinese University of Hong Kong, Hong Kong); Qun Huang (Peking University, China); Patrick Pak-Ching Lee (The Chinese University of Hong Kong, Hong Kong)

Superspreaders (i.e., hosts with numerous distinct connections) remain severe threats to production networks. How to accurately detect superspreaders in real time at scale remains a challenging issue. We present SpreadSketch, an invertible sketch data structure for network-wide superspreader detection with theoretical guarantees on memory space, performance, and accuracy. SpreadSketch tracks candidate superspreaders and embeds estimated fan-outs in binary hash strings inside small, static memory space, such that multiple SpreadSketch instances can be merged to provide a network-wide measurement view for recovering superspreaders and their estimated fan-outs. We present a formal theoretical analysis of SpreadSketch in terms of space and time complexities as well as error bounds. Trace-driven evaluation shows that SpreadSketch achieves higher accuracy and performance than state-of-the-art sketches. Furthermore, we prototype SpreadSketch in P4 and show its feasible deployment in commodity hardware switches.
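The mergeability that makes a network-wide view possible typically rests on distinct counters whose merge is an elementwise maximum: sketches built at different switches combine into one sketch over the union of their traffic. The Flajolet-Martin-style counter below illustrates that property only; it is not the SpreadSketch structure itself, which additionally embeds candidate keys so the sketch can be inverted.

```python
import hashlib

def _hash64(item, seed):
    """Deterministic 64-bit hash of an item under a given seed."""
    data = f"{seed}:{item}".encode()
    return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

class DistinctCounter:
    """Flajolet-Martin-style distinct counter over m registers.

    Each register keeps the max count of trailing zero bits seen in
    hashed items; counters over different traffic merge by elementwise
    max. Simplified illustration, not SpreadSketch itself.
    """
    def __init__(self, m=16):
        self.registers = [0] * m

    def add(self, item):
        for seed in range(len(self.registers)):
            h = _hash64(item, seed)
            tz = (h & -h).bit_length() - 1 if h else 64  # trailing zeros
            self.registers[seed] = max(self.registers[seed], tz)

    def merge(self, other):
        out = DistinctCounter(len(self.registers))
        out.registers = [max(a, b)
                         for a, b in zip(self.registers, other.registers)]
        return out

    def estimate(self):
        mean_r = sum(self.registers) / len(self.registers)
        return 2 ** mean_r / 0.77351  # standard FM correction factor

s1, s2 = DistinctCounter(), DistinctCounter()
for d in range(100):
    s1.add(("10.0.0.1", d))       # switch 1 sees 100 destinations
for d in range(100, 200):
    s2.add(("10.0.0.1", d))       # switch 2 sees 100 different ones
merged = s1.merge(s2)
print(merged.estimate() >= max(s1.estimate(), s2.estimate()))  # True
```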

Variational Information Diffusion for Probabilistic Cascades Prediction

Fan Zhou and Xovee Xu (University of Electronic Science and Technology of China, China); Kunpeng Zhang (University of Maryland, USA); Goce Trajcevski (Iowa State University, USA); Ting Zhong (University of Electronic Science and Technology of China, China)

Understanding in-network information diffusion is a fundamental problem in many application domains, and one of the primary challenges is predicting the size of an information cascade. Most existing models rely either on hypothesized point processes (e.g., Poisson and Hawkes processes), or simply predict the information propagation via deep neural networks. However, they fail to simultaneously capture the underlying structure of a cascade graph and the propagation of uncertainty in the diffusion, which may result in unsatisfactory prediction performance.

To address these limitations, in this work we propose a novel probabilistic cascade prediction framework: Variational Cascade (VaCas) graph learning networks. VaCas allows non-linear information diffusion inference and models the information diffusion process by learning latent representations of both structural and temporal information. It is a pattern-agnostic model that leverages variational inference to learn node-level and cascade-level latent factors in an unsupervised manner. In addition, VaCas captures both cascade representation uncertainty and node infection uncertainty, while enabling hierarchical pattern learning of information diffusion. Extensive experiments conducted on real-world datasets demonstrate that VaCas significantly improves prediction accuracy, compared with state-of-the-art approaches, while also enabling interpretability.

Session Chair

Wei Bao (The University of Sydney)

Session 7-C

Security III

Conference
9:00 AM — 10:30 AM EDT
Local
Jul 9 Thu, 6:00 AM — 7:30 AM PDT

A Dynamic Mechanism for Security Management in Multi-Agent Networked Systems

Shiva Navabi and Ashutosh Nayyar (University of Southern California, USA)

We study the problem of designing a dynamic mechanism for security management in an interconnected multi-agent system with N strategic agents and one coordinator. The system is modeled as a network of N vertices. Each agent resides in one of the vertices of the network and has a privately known security state that describes its safety level at each time. The evolution of an agent's security state depends on its own state, the states of its neighbors in the network, and the actions taken by the network coordinator. Each agent's utility at time t likewise depends on its own state, the states of its neighbors, and the coordinator's actions. The objective of the coordinator is to take security actions that maximize the long-term expected social surplus. Being strategic, agents need to be incentivized to reveal their private security state information. This results in a dynamic mechanism design problem for the coordinator. We leverage the inter-temporal correlations between the agents' security states to identify sufficient conditions under which an incentive-compatible, expected-social-surplus-maximizing mechanism can be constructed. We describe the construction of the desired mechanism in two special cases of our formulation.

KV-Fresh: Freshness Authentication for Outsourced Multi-Version Key-Value Stores

Yidan Hu and Rui Zhang (University of Delaware, USA); Yanchao Zhang (Arizona State University, USA)

Data outsourcing is a promising technical paradigm to facilitate cost-effective real-time data storage, processing, and dissemination. In such a system, a data owner proactively pushes a stream of data records to a third-party cloud service provider (CSP) for storage, which in turn processes various types of queries from end users on the data owner's behalf. This paper considers outsourced multi-version key-value stores, which have gained increasing popularity in recent years, where a critical security challenge is to ensure that the CSP returns both authentic and fresh data in response to end users' queries. Despite several recent attempts at authenticating data freshness in outsourced key-value stores, they either incur excessively high communication cost or offer only very limited real-time guarantees. To fill this gap, this paper introduces KV-Fresh, a novel freshness authentication scheme for outsourced key-value stores that offers a strong real-time guarantee. KV-Fresh is designed based on a novel data structure, the Linked Key Span Merkle Hash Tree, which enables highly efficient freshness proofs by embedding chaining relationships among records generated at different times. Detailed simulation studies using real datasets confirm the efficacy and efficiency of KV-Fresh.
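KV-Fresh builds on Merkle hash trees, in which a single root hash authenticates a whole set of records and a logarithmic-size proof authenticates any one of them. A plain Merkle root and inclusion-proof check, without the key-span chaining that is the paper's contribution, can be sketched as:

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def _next_level(level):
    """Hash adjacent pairs; an odd last node is promoted unchanged."""
    nxt = [_h(level[i] + level[i + 1]) for i in range(0, len(level) - 1, 2)]
    if len(level) % 2:
        nxt.append(level[-1])
    return nxt

def merkle_root(leaves):
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = _next_level(level)
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes (with is-left flags) proving leaves[index] is included."""
    level = [_h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        sib = index ^ 1
        if sib < len(level):
            proof.append((level[sib], sib < index))
        level = _next_level(level)
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = _h(leaf)
    for sibling, is_left in proof:
        node = _h(sibling + node) if is_left else _h(node + sibling)
    return node == root

records = [b"k1=v1", b"k2=v2", b"k3=v3", b"k4=v4"]
root = merkle_root(records)
print(verify(b"k2=v2", merkle_proof(records, 1), root))  # True
```

A client holding only the root can thus check any returned record; KV-Fresh's chained variant additionally proves the record is the freshest version.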

Modeling the Impact of Network Connectivity on Consensus Security of Proof-of-Work Blockchain

Yang Xiao (Virginia Tech, USA); Ning Zhang (Washington University in St. Louis, USA); Wenjing Lou and Thomas Hou (Virginia Tech, USA)

Popularized by Bitcoin, proof-of-work (PoW) blockchain is one of the most widely deployed distributed consensus systems today. Driven by incentives, PoW-based blockchain allows for democratized consensus making with a correctness guarantee, as long as the majority of the participants in the network are honest and rational. However, this elegant game-theoretic security model falls apart when the system is deployed with potentially adversarial components and network conditions. For the distributed consensus protocol used in blockchain, network connectivity plays a crucial role in the overall security of the system. A well-connected adversary with a communication advantage over honest nodes has a higher chance of winning blockchain forks and harvesting higher-than-usual mining revenue. In this paper, we evaluate the impact of network connectivity on PoW blockchain consensus security via modeling analysis. Specifically, we analyze two adversarial scenarios: 1) honest-but-potentially-colluding, and 2) selfish mining. For each scenario, we evaluate the communication capability of networked nodes from the heterogeneous network connectivity pattern and analyze its impact on the consensus security of the underlying blockchain. Our analysis serves as a paradigm for future endeavors that seek to link blockchain security with network connectivity.
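The textbook fork-race analysis gives a feel for why connectivity matters: by Nakamoto's gambler's-ruin argument, an attacker holding a fraction q of the hash power ever erases a deficit of z blocks with probability (q/p)^z where p = 1 - q, and a communication advantage acts like an increase in effective q. This is the classic whitepaper formula, not the paper's heterogeneous-connectivity model, which refines it.

```python
def catch_up_probability(q, z):
    """Probability an attacker with hash-power share q ever erases a
    deficit of z blocks (gambler's ruin, from the Bitcoin whitepaper).

    For q >= 1/2 the attacker catches up with certainty; a network-level
    communication advantage effectively raises q.
    """
    p = 1.0 - q
    return 1.0 if q >= p else (q / p) ** z

# Catch-up probability falls off geometrically in the deficit z.
for q in (0.1, 0.3, 0.45):
    print(q, [catch_up_probability(q, z) for z in (1, 3, 6)])
```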

Scheduling DDoS Cloud Scrubbing in ISP Networks via Randomized Online Auctions

Wencong You, Lei Jiao and Jun Li (University of Oregon, USA); Ruiting Zhou (Wuhan University, China)

While both Internet Service Providers (ISPs) and third-party Security Service Providers (SSPs) offer Distributed Denial-of-Service (DDoS) mitigation services through cloud-based scrubbing centers, it is often beneficial for ISPs to outsource part of the traffic scrubbing to SSPs to achieve lower economic cost and better network performance. To explore this potential, we design an online auction mechanism, whose key challenge is the switching cost of using different winning bids over time. Formulating the social cost minimization as a nonconvex integer program, we first relax it and design an online algorithm that breaks it into a series of modified single-shot problems and solves each of them in polynomial time, without requiring knowledge of future inputs; then, we design a randomized rounding algorithm to convert the fractional decisions into integers without violating any constraints; and finally, we design the payment for each bid based on its winning probability. We rigorously prove that our mechanism achieves a parameterized constant competitive ratio for the long-term social cost, plus truthfulness and individual rationality in expectation. We also exhibit its superior practical performance via evaluations driven by real-world data traces.

Session Chair

Ruozhou Yu (North Carolina State University)

Session 7-D

Network Intelligence V

Conference
9:00 AM — 10:30 AM EDT
Local
Jul 9 Thu, 6:00 AM — 7:30 AM PDT

Automating Cloud Deployment for Deep Learning Inference of Real-time Online Services

Yang Li (Tsinghua University, China); Zhenhua Han (University of Science and Technology of China, China); Quanlu Zhang (MSRA, China); Zhenhua Li (Tsinghua University, China); Haisheng Tan (University of Science and Technology of China, China)

Real-time online services using pre-trained deep neural network (DNN) models, e.g., Siri and Instagram, require low latency and cost-efficiency for quality of service and commercial competitiveness. When deployed in a cloud environment, such services call for an appropriate selection of cloud configurations (i.e., specific types of VM instances), as well as a considerate device placement plan that places the operations of a DNN model onto multiple computation devices like GPUs and CPUs. Currently, deployment relies mainly on service providers' manual efforts, which is not only onerous but also often far from satisfactory (for the same service, a poor deployment can incur tens of times higher costs). In this paper, we attempt to automate the cloud deployment of real-time online DNN inference with minimum costs under the constraint of acceptably low latency. This attempt is enabled by jointly leveraging Bayesian Optimization and Deep Reinforcement Learning to adaptively unearth the (nearly) optimal cloud configuration and device placement with limited search time. We implement a prototype system of our solution based on TensorFlow and conduct extensive experiments on top of Microsoft Azure. The results show that our solution substantially outperforms non-trivial baselines in terms of inference speed and cost-efficiency.

Geryon: Accelerating Distributed CNN Training by Network-Level Flow Scheduling

Shuai Wang, Dan Li and Jinkun Geng (Tsinghua University, China)

Increasingly rich data sets and complicated models make distributed machine learning more and more important. However, the cost of extensive and frequent parameter synchronization can easily diminish the benefits of distributed training across multiple machines. In this paper, we present Geryon, a network-level flow scheduling scheme to accelerate distributed Convolutional Neural Network (CNN) training. Geryon leverages multiple flows with different priorities to transfer parameters of different urgency levels, which naturally coordinates multiple parameter servers and prioritizes urgent parameter transfers in the entire network fabric. Geryon requires no modification of CNN models and does not affect training accuracy. Based on experimental results for four representative CNN models on a testbed of 8 GPU (NVIDIA K40) servers, Geryon achieves up to 95.7% scaling efficiency even with 10GbE bandwidth. In contrast, for most models, the scaling efficiency of vanilla TensorFlow is no more than 37%, and that of TensorFlow with parameter partition and slicing is around 80%. In terms of training throughput, Geryon enhanced with parameter partition and slicing achieves up to 4.37x speedup, with the flow scheduling algorithm itself contributing up to 1.2x speedup over parameter partition and slicing alone.

Neural Tensor Completion for Accurate Network Monitoring

Kun Xie (Hunan University, USA); Huali Lu (Hunan University, China); Xin Wang (Stony Brook University, USA); Gaogang Xie (Institute of Computing Technology, Chinese Academy of Sciences, China); Yong Ding (Guilin University of Electronic Technology, China); Dongliang Xie (State University of New York at Stony Brook, USA); Jigang Wen (Chinese Academy of Science & Institute of Computing Technology, China); Dafang Zhang (Hunan University, China)

Monitoring the performance of a large network is very costly. Instead, a subset of paths or time intervals of the network can be measured, while the remaining network data are inferred by leveraging their spatio-temporal correlations. The quality of missing data recovery relies heavily on the inference algorithms. Tensor completion has attracted recent attention for its capability of exploiting the multi-dimensional data structure for more accurate missing data inference. However, current tensor completion algorithms only model the three-order interaction of data features through the inner product, which is insufficient to capture the high-order, nonlinear correlations across different feature dimensions. In this paper, we propose a novel Neural Tensor Completion (NTC) scheme that models three-order interactions among data features with the outer product and builds a 3D interaction map. Based on this map, we apply 3D convolution to learn features of high-order interactions from the local range to the global range. We theoretically demonstrate that this leads to good learning ability. We further conduct extensive experiments on two real-world network monitoring datasets, Abilene and WS-DREAM, to demonstrate that NTC can significantly reduce the error in missing data recovery.

Optimizing Federated Learning on Non-IID Data with Reinforcement Learning

Hao Wang and Zakhary Kaplan (University of Toronto, Canada); Di Niu (University of Alberta, Canada); Baochun Li (University of Toronto, Canada)

In this paper, we propose Favor, an experience-driven federated learning framework that actively selects client devices for training to deal with non-IID data. With both empirical studies and mathematical analysis, we found an implicit connection between the distribution of training data on a device and the weights of a model trained on that data. Favor profiles the data distribution on each device using this implicit connection, without access to the raw data. In Favor, we propose a new mechanism based on reinforcement learning that learns to construct a specific subset of client devices in each communication round. Updated with the aggregated model weights generated by this subset of devices, the global model obtained using federated learning can counterbalance the bias introduced by non-IID data. Our extensive experiments using PyTorch show that Favor reduces the number of communication rounds by up to 49% on MNIST, up to 23% on FashionMNIST, and up to 42% on CIFAR-10, compared with the Federated Averaging algorithm.

Session Chair

Ruidong Li (National Institute of Information and Communications Technology (NICT))

Session 7-E

Network Economics

Conference
9:00 AM — 10:30 AM EDT
Local
Jul 9 Thu, 6:00 AM — 7:30 AM PDT

A Lightweight Auction Framework for Spectrum Allocation with Strong Security Guarantees

Ke Cheng (Xidian University, China); Liangmin Wang (Jiangsu University, China); Yulong Shen and Yangyang Liu (Xidian University, China); Yongzhi Wang (Park University, USA); Lele Zheng (Xidian University, Xi'an, Shaanxi, China)

Auctions are an effective mechanism for distributing spectrum resources. Although many privacy-preserving auction schemes for spectrum allocation have been proposed, none of them can perform practical spectrum auctions while ensuring sufficient security for bidders' private information, such as geo-locations, bid values, and data access patterns. To address this problem, we propose SLISA, a lightweight auction framework that enables efficient spectrum allocation without revealing anything but the auction outcome, i.e., the winning bidders and their clearing prices. We present contributions on two fronts. First, as the foundation of our design, we adopt a Shuffle-then-Compute strategy to build a series of secure sub-protocols based on lightweight cryptographic primitives (e.g., additive secret sharing and basic garbled circuits). Second, we improve an advanced spectrum auction mechanism to make it data-oblivious, such that data access patterns are hidden. Meanwhile, the modified protocols adapt to our building blocks without affecting their validity and security. We formally prove the security of all protocols under a semi-honest adversary model, and demonstrate performance improvements over state-of-the-art works through extensive experiments.
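Additive secret sharing, one of the lightweight primitives named above, is simple to state: a value is split into random shares that sum to it modulo a public prime, and servers can add shared values without ever seeing them. A minimal sketch of the primitive itself (the two-server setup and modulus are illustrative, not SLISA's protocol):

```python
import secrets

P = 2**61 - 1  # public modulus, chosen larger than any bid value

def share(value, n):
    """Split value into n additive shares summing to value mod P."""
    parts = [secrets.randbelow(P) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % P)
    return parts

def reconstruct(shares):
    return sum(shares) % P

# Two servers can sum bids share-wise without seeing any bid:
bids = [120, 75, 310]
server = [[], []]
for b in bids:
    s0, s1 = share(b, 2)
    server[0].append(s0)
    server[1].append(s1)
local_sums = [sum(col) % P for col in server]
print(reconstruct(local_sums) == sum(bids))  # True
```

Each share alone is uniformly random, so a single semi-honest server learns nothing about the bids.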

Fair and Protected Profit Sharing for Data Trading in Pervasive Edge Computing Environments

Yaodong Huang, Yiming Zeng, Fan Ye and Yuanyuan Yang (Stony Brook University, USA)

Innovative edge devices (e.g., smartphones, IoT devices) are becoming ever more pervasive in our daily lives. With powerful sensing and computing capabilities, users can generate massive amounts of data. A new business model has emerged in which data producers sell their data directly to consumers. However, how to protect the profit of the data producer from rogue consumers that may resell without authorization remains challenging. In this paper, we propose a smart-contract-based protocol to protect the profit of the data producer while allowing consumers to resell the data legitimately. The protocol ensures that revenue is shared with the data producer on authorized reselling, and detects any unauthorized reselling. We formulate a fair revenue sharing problem to maximize the profit of both the data producer and resellers, model it as a two-stage Stackelberg game, and determine a ratio for sharing the reselling revenue between the data producer and resellers. Extensive simulations show that with resellers, our mechanism can achieve up to 97.8% higher profit for the data producer and resellers.

Secure Balance Planning of Off-blockchain Payment Channel Networks

Peng Li and Toshiaki Miyazaki (The University of Aizu, Japan); Wanlei Zhou (University of Technology Sydney, Australia)

Off-blockchain payment channels can significantly improve blockchain scalability by enabling a large number of micro-payments between two blockchain nodes, without committing every single payment to the blockchain. Multiple payment channels form a payment network, so that two nodes without a direct channel can still make payments. A critical challenge in payment network construction is deciding how many funds should be deposited into payment channels as initial balances, which seriously influences the performance of payment networks but has seldom been studied by existing work. In this paper, we address this challenge by designing PnP, a balance planning service for payment networks. Given estimated payment demands among nodes, PnP can decide channel balances to satisfy these demands with high probability. It does not rely on any trusted third parties, and can provide strong protection from malicious attacks with low overhead. It obtains these benefits through two novel designs: cryptographic sortition and a chance-constrained balance planning algorithm. Experimental results on a testbed of 30 nodes show that PnP can enable 30% more payments than other designs.

Travel with Your Mobile Data Plan: A Location-Flexible Data Service

Zhiyuan Wang (The Chinese University of Hong Kong, Hong Kong); Lin Gao (Harbin Institute of Technology (Shenzhen), China); Jianwei Huang (The Chinese University of Hong Kong, Hong Kong)

Mobile Network Operators (MNOs) provide wireless data services based on tariff data plans with a monthly data cap. Traditionally, the data cap is only valid for domestic data consumption, and users have to pay extra roaming fees for overseas data consumption. A recent location-flexible service allows the user to access the domestic data cap in overseas locations (by configuring location-flexibility for a daily fee). This paper studies the economic effect of location-flexibility on the overseas market, which comprises the users who travel overseas within the month and thus varies from month to month. Each user decides his joint flexibility configuration and data consumption (J-FCDC) every day. The user's J-FCDC problem is an online payoff maximization. We analyze its offline counterpart (which is NP-hard) and design an online strategy with provable performance. Moreover, we propose a pricing policy for the location-flexible service that does not rely on market statistics. We find that location-flexibility induces users to consume more data on low-valuation days, and that the MNO benefits from stimulating users' data consumption through appropriate pricing. Numerical results based on empirical data show that location-flexibility improves the MNO's revenue by 18% and the users' payoffs by 12% on average.

Session Chair

Murat Yuksel (University of Central Florida)

Session 7-F

UAV II

Conference
9:00 AM — 10:30 AM EDT
Local
Jul 9 Thu, 6:00 AM — 7:30 AM PDT

Distributed Collaborative 3D-Deployment of UAV Base Stations for On-Demand Coverage

Tatsuaki Kimura and Masaki Ogura (Osaka University, Japan)

The use of unmanned aerial vehicles (UAVs) as flying base stations (BSs) has been gaining significant attention because, owing to their flexible 3D mobility, they can efficiently provide connections to ground users during temporary events (e.g., sports events). However, the complicated air-to-ground channel characteristics and interference among UAVs hinder the dynamic optimization of the 3D deployment of UAVs for spatially and temporally varying users. In this paper, we propose a novel distributed 3D deployment method for UAV-BSs in a downlink millimeter-wave network for on-demand coverage. Our method consists of two main parts: a sensing-aided crowd density estimation part and a distributed push-sum algorithm part. Since it is unrealistic to obtain the specific positions of all users, the first part estimates the user density from partial information obtained from on-ground sensors that can detect ground users around them. In the second part, each UAV dynamically updates its 3D position in collaboration with its neighbors so that the total coverage of users is maximized. By employing a distributed push-sum protocol framework, we also prove the convergence of our algorithm. Simulation results demonstrate that our method can improve coverage with a limited number of sensors and is applicable to dynamic networks.
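The push-sum protocol family the method builds on can be illustrated with its classic averaging form: each node keeps a value-weight pair, splits both among its out-neighbors (and itself) each round, and the ratio converges to the global average on any strongly connected directed graph. This synchronous sketch shows only that consensus primitive, not the paper's coverage-maximization update.

```python
def push_sum(values, neighbors, rounds=50):
    """Push-sum averaging on a directed graph.

    Each node i keeps (s_i, w_i), splits both evenly among its
    out-neighbors and itself each round; s_i / w_i converges to the
    global average when the graph is strongly connected.
    """
    n = len(values)
    s = list(map(float, values))
    w = [1.0] * n
    for _ in range(rounds):
        new_s, new_w = [0.0] * n, [0.0] * n
        for i in range(n):
            targets = neighbors[i] + [i]   # out-neighbors plus self
            for j in targets:
                new_s[j] += s[i] / len(targets)
                new_w[j] += w[i] / len(targets)
        s, w = new_s, new_w
    return [si / wi for si, wi in zip(s, w)]

ring = {0: [1], 1: [2], 2: [0]}            # directed 3-node ring
print([round(x, 3) for x in push_sum([3.0, 6.0, 9.0], ring)])
# [6.0, 6.0, 6.0]
```

Note that the graph is only directed, which suits UAV networks where links need not be symmetric.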

Looking Before Crossing: An Optimal Algorithm to Minimize UAV Energy by Speed Scheduling with A Practical Flight Energy Model

Feng Shan, Luo Junzhou, Runqun Xiong, Wenjia Wu and Jiashuo Li (Southeast University, China)

1
Unmanned aerial vehicles (UAVs) are being widely used in wireless communication, e.g., for data collection from ground nodes (GNs), and energy is critical. Existing works combine speed scheduling with trajectory design for UAVs, which is difficult to solve optimally and loses sight of the fundamental nature of speed scheduling. We focus on speed scheduling by considering straight-line flights, which have applications in monitoring power transmission lines, roads, pipes, or rivers/coasts. Through real-world flight tests, we disclose a speed-related flight energy consumption model, distinct from typical distance-related or duration-related models. Based on this practical energy model, we develop the 'look before cross' (LBC) algorithm: on the time-distance diagram, we construct rooms representing GNs, and the goal is to design a room-crossing walking trajectory, which maps uniquely to a speed schedule. Such a trajectory is determined by looking before crossing rooms. It is proved to be optimal for the offline scenario, in which information about GNs is available before scheduling. For the online scenario, we propose a heuristic based on LBC. Simulation shows it performs close to the optimal offline solution. Our study of speed scheduling and the practical flight energy model sheds light on a new direction in UAV-aided wireless communication.

SwarmControl: An Automated Distributed Control Framework for Self-Optimizing Drone Networks

Lorenzo Bertizzolo and Salvatore D'Oro (Northeastern University, USA); Ludovico Ferranti (Northeastern University, USA & Sapienza University of Rome, Italy); Leonardo Bonati and Emrecan Demirors (Northeastern University, USA); Zhangyu Guan (University at Buffalo, USA); Tommaso Melodia (Northeastern University, USA); Scott M Pudlewski (Georgia Tech Research Institute, USA)

8
Networks of Unmanned Aerial Vehicles will play a vital role in future Internet of Things and 5G networks. However, how to control UAV networks in an automated and scalable fashion in distributed, interference-prone, and potentially adversarial environments is still an open research problem.

We introduce SwarmControl, a new software-defined control framework for UAV wireless networks based on distributed optimization principles. In essence, SwarmControl provides the Network Operator (NO) with a unified centralized abstraction of the networking and flight control functionalities. High-level control directives are then automatically decomposed and converted into distributed network control actions that are executed through programmable software-radio protocol stacks. SwarmControl (i) constructs a network control problem representation of the directives of the NO; (ii) decomposes it into a set of distributed sub-problems; and (iii) automatically generates numerical solution algorithms to be executed at individual UAVs.

We present a prototype of an SDR-based, fully reconfigurable UAV network platform that implements the proposed control framework, based on which we assess the effectiveness and flexibility of SwarmControl with extensive flight experiments. Results indicate that the SwarmControl framework enables swift reconfiguration of the network control functionalities, and it can achieve an average throughput gain of 159% compared to state-of-the-art solutions.

WBF-PS: WiGig Beam Fingerprinting for UAV Positioning System in GPS-denied Environments

Pei-Yuan Hong, Chi-Yu Li, Hong-Rong Chang, YuanHao Hsueh and Kuochen Wang (National Chiao Tung University, Taiwan)

1
Unmanned aerial vehicles (UAVs) are being investigated as a substitute for labor in many indoor applications, e.g., asset tracking and surveillance, where the global positioning system (GPS) is not available. Emerging autonomous UAVs are also expected to land on indoor or canopied aprons automatically. Such GPS-denied environments require alternative non-GPS positioning methods. Though there have been some vision-based solutions for UAVs, they perform poorly in scenes with bad illumination conditions, or estimate only relative locations but not global positions. Other common indoor localization methods do not cover UAV factors, such as low power and flying behaviors. To this end, we propose a practical non-GPS positioning system for UAVs, named WBF-PS, using low-power, off-the-shelf WiGig devices. We formulate a 3-dimensional beam fingerprint by leveraging the diversity of available TX/RX beams and the link quality. To augment accuracy, we use the weighted k-nearest neighbors algorithm to overcome partial fingerprint inaccuracy, and apply the particle filtering technique to account for UAV motion. We prototype WBF-PS on our UAV platform, and it yields a 90th-percentile positioning error below 1 m under both small and large velocity estimation errors.
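The weighted k-nearest-neighbors step the abstract mentions can be sketched as below. This is an illustrative textbook wkNN over fingerprints (vectors of per-beam link qualities mapped to known 3-D positions), not the authors' implementation; the fingerprint values are made up.

```python
# Hedged sketch: inverse-distance-weighted kNN over beam fingerprints.
import math

def wknn_locate(db, query, k=3, eps=1e-9):
    """db: list of (fingerprint_vector, (x, y, z)); returns estimated (x, y, z)."""
    # rank fingerprints by Euclidean distance in fingerprint space
    scored = sorted(
        ((math.dist(fp, query), pos) for fp, pos in db), key=lambda t: t[0]
    )[:k]
    weights = [1.0 / (d + eps) for d, _ in scored]   # closer => heavier weight
    total = sum(weights)
    return tuple(
        sum(w * pos[axis] for w, (_, pos) in zip(weights, scored)) / total
        for axis in range(3)
    )

# Toy database of three fingerprints; the query matches the third one exactly,
# so the estimate is pulled to its position (2.0, 0.0, 1.0).
db = [([-40.0, -60.0], (0.0, 0.0, 1.0)),
      ([-60.0, -40.0], (4.0, 0.0, 1.0)),
      ([-50.0, -50.0], (2.0, 0.0, 1.0))]
est = wknn_locate(db, [-50.0, -50.0])
```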

Session Chair

Enrico Natalizio (University of Lorraine/Loria)

Session 7-G

SDN III

Conference
9:00 AM — 10:30 AM EDT
Local
Jul 9 Thu, 6:00 AM — 7:30 AM PDT

AudiSDN: Automated Detection of Network Policy Inconsistencies in Software-Defined Networks

Seungsoo Lee (KAIST, Korea (South)); Seungwon Woo (ETRI, Korea (South)); Jinwoo Kim (KAIST, Korea (South)); Vinod Yegneswaran and Phillip A Porras (SRI International, USA); Seungwon Shin (KAIST, Korea (South))

2
At the foundation of every network security architecture lies the premise that formulated network flow policies are reliably deployed and enforced by the network infrastructure. However, software-defined networks (SDNs) add a particular challenge to satisfying this premise, as for SDNs the flow policy implementation spans multiple applications and abstraction layers across the SDN stack. In this paper, we focus on the question of how to automatically identify cases in which the SDN stack fails to prevent policy inconsistencies from arising among these components. This question is essential, as when such inconsistencies arise, the implications for the security and reliability of the network are devastating. We present AudiSDN, an automated fuzz-testing framework designed to formulate test cases in which policy inconsistencies can arise in OpenFlow networks, the most prevalent SDN protocol used today. We also present results from applying AudiSDN to two widely used SDN controllers, Floodlight and ONOS. In fact, our test results have led to the filing of 3 separate CVE reports. We believe that the approach presented in this paper is applicable to the breadth of OpenFlow platforms used today, and that its broader usage will help to address a serious yet understudied pragmatic concern.

Inferring Firewall Rules by Cache Side-channel Analysis in Network Function Virtualization

Youngjoo Shin (Kwangwoon University, Korea (South)); Dongyoung Koo (Hansung University, Korea (South)); Junbeom Hur (Korea University, Korea (South))

2
Network function virtualization takes advantage of virtualization technology to achieve flexibility in network service provisioning. However, it comes at the cost of security risks caused by cache side-channel attacks on virtual machines. In this study, we investigate the security impact of these attacks on virtualized network functions. In particular, we propose a novel cache-based reconnaissance technique against virtualized Linux-based firewalls. The proposed technique has significant advantages from the perspective of attackers. First, it enhances evasiveness against intrusion detection owing to its ability to spoof sources. Second, it allows inference on a wide variety of filtering rules. In experiments on VyOS, the proposed method could infer the firewall rules with an accuracy of more than 90% using only a few dozen packets. We also present countermeasures to mitigate cache-based attacks on virtualized network functions.

Multicast Traffic Engineering with Segment Trees in Software-Defined Networks

Chih-Hang Wang and Sheng-Hao Chiang (Academia Sinica, Taiwan); Shan-Hsiang Shen (National Taiwan University of Science and Technology, Taiwan); De-Nian Yang (Academia Sinica, Taiwan); Wen-Tsuen Chen (National Tsing Hua University, Taiwan)

1
Previous research on Segment Routing (SR) mostly focused on unicast, whereas online SDN multicast with segment trees supporting IETF dynamic group membership has not been explored. Compared with traditional unicast SR, online SDN multicast with segment trees is more challenging since finding an appropriate size, shape, and location for each segment tree is crucial to deploying it in more multicast trees. In this paper, we explore Multi-tree Multicast Segment Routing (MMSR) to jointly minimize the bandwidth consumption and forwarding rule updates over time by leveraging segment trees. We prove MMSR is NP-hard and design a competitive algorithm, STRUS, to achieve the tightest bound. STRUS includes STR and STP to merge smaller overlapping subtrees into segment trees, and then tailors them to serve more multicast trees. We design the Stability Indicator and Reusage Indicator to carefully construct segment trees at the backbone of multicast trees and reroute multicast trees to span more segment trees. Simulation and implementation on real SDNs with YouTube traffic show that STRUS outperforms state-of-the-art algorithms regarding total cost and TCAM usage. Moreover, the running time of STRUS is no more than 1 second for massive networks with thousands of nodes, and it is therefore practical for SDN.

SDN-based Order-aware Live Migration of Virtual Machines

Dinuni Fernando, Ping Yang and Hui Lu (Binghamton University, USA)

2
Live migration is a key technique to transfer virtual machines (VMs) from one machine to another. Often, multiple VMs need to be migrated in response to events such as server maintenance, load balancing, and impending failures. However, VM migration is a resource-intensive operation, which pressures the CPU, memory, and network resources of the source and destination hosts as well as intermediate network links. The live migration mechanism ends up contending for finite resources with the VMs that it needs to migrate, which prolongs the total migration time and worsens the performance of applications running inside the VMs. In this paper, we propose SOLive, a new approach to reduce resource contention between the migration process and the VMs being migrated. First, by considering the nature of VM workloads, SOLive manages the migration order to significantly reduce the total migration time. Second, to reduce network contention between the migration process and the VMs, SOLive uses a combination of software-defined-networking-based resource reservation and scatter-gather-based VM migration to quickly deprovision the source host. A prototype implementation of our approach on the KVM/QEMU platform shows that SOLive quickly evicts VMs from the source host with low impact on VMs' performance.
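The intuition behind workload-aware migration ordering can be illustrated with the textbook pre-copy model: a VM whose memory is dirtied quickly takes disproportionately long to migrate, so ordering by estimated migration time matters. The model and the shortest-first rule below are a generic approximation for illustration, not SOLive's actual ordering logic; all numbers are hypothetical.

```python
# Illustrative pre-copy migration-time model and a shortest-first ordering.
def precopy_time(mem_gb, dirty_rate, bandwidth):
    """Rough pre-copy estimate (seconds): memory drains at (bandwidth - dirty_rate)."""
    if dirty_rate >= bandwidth:
        return float("inf")            # pre-copy never converges
    return mem_gb / (bandwidth - dirty_rate)

def migration_order(vms, bandwidth):
    """Shortest-migration-first order; vms = {name: (mem_gb, dirty_rate)}."""
    return sorted(vms, key=lambda v: precopy_time(*vms[v], bandwidth))

# A write-heavy database VM migrates last; read-mostly VMs drain first.
order = migration_order(
    {"db": (8.0, 0.9), "web": (4.0, 0.1), "cache": (2.0, 0.5)}, bandwidth=1.0)
```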

Session Chair

Jin Zhao (Fudan University)

Session Coffee-Break-4-AM

Virtual Coffee Break

Conference
10:30 AM — 11:00 AM EDT
Local
Jul 9 Thu, 7:30 AM — 8:00 AM PDT

Virtual Coffee Break

N/A

0
This talk does not have an abstract.

Session Chair

N/A

Session 8-A

Localization I

Conference
11:00 AM — 12:30 PM EDT
Local
Jul 9 Thu, 8:00 AM — 9:30 AM PDT

Edge Assisted Mobile Semantic Visual SLAM

Jingao Xu, Hao Cao, Danyang Li and Kehong Huang (Tsinghua University, China); Chen Qian (Dalian University of Technology, China); Longfei Shangguan (Princeton University, USA); Zheng Yang (Tsinghua University, China)

1
Localization and navigation play a key role in many location-based services and have attracted numerous research efforts from both the academic and industrial communities. In recent years, visual SLAM has been prevailing for robots and autonomous driving cars. However, the ever-growing computation resources demanded by SLAM impede its application to resource-constrained mobile devices. In this paper, we present the design, implementation, and evaluation of edgeSLAM, an edge-assisted real-time semantic visual SLAM service running on mobile devices. edgeSLAM leverages a state-of-the-art semantic segmentation algorithm to enhance localization and mapping accuracy, and speeds up the computation-intensive SLAM and semantic segmentation algorithms by computation offloading. The key innovations of edgeSLAM include an efficient computation offloading strategy, an opportunistic data sharing mechanism, and an adaptive task scheduling algorithm. We fully implement edgeSLAM on an edge server and different types of mobile devices. Extensive experiments are conducted on 3 datasets, and the results show that edgeSLAM is able to run on mobile devices at a 35 fps frame rate and achieves 5 cm localization accuracy, outperforming existing solutions by more than 15%. To the best of our knowledge, edgeSLAM is the first real-time semantic visual SLAM for mobile devices.

POLAR: Passive object localization with IEEE 802.11ad using phased antenna arrays

Dolores Garcia (Imdea Networks, Spain); Jesús O. Lacruz (IMDEA Networks Institute, Spain); Pablo Jimenez Mateo (IMDEA Networks, Spain); Joerg Widmer (IMDEA Networks Institute, Spain)

2
Millimeter-wave systems not only provide high data rates and low latency, but the very large bandwidth also allows for highly accurate environment sensing. Such properties are extremely useful for smart factory scenarios. At the same time, reusing existing communication links for passive object localization is significantly more challenging than radar-based approaches due to the sparsity of the millimeter-wave multi-path environment and the weakness of the reflected paths compared to the line-of-sight path.

In this paper we explore the localization accuracy that can be achieved with IEEE 802.11ad devices. We use commercial APs, while for the stations we design a full-bandwidth 802.11ad-compatible FPGA-based platform with a phased antenna array. The stations exploit the preamble of the beam training packets of the APs to obtain CIR measurements for all antenna patterns. With this, we determine distance and angle information for the different multi-path components in the environment to passively localize a mobile object. We evaluate our system with multiple APs and a moving robot with a metallic surface. Despite the strong limitations of the hardware, our system operates in real time and achieves a mean error of 30 cm and sub-meter accuracy in 98% of the cases.

Towards Single Source based Acoustic Localization

Linsong Cheng, Zhao Wang, Yunting Zhang, Weiyi Wang, Weimin Xu and Jiliang Wang (Tsinghua University, China)

0
Acoustic-based tracking has been shown to be promising in many applications like virtual reality, smart home, video gaming, etc. Its real-life deployments, however, face fundamental limitations. Existing approaches generally need three sound sources, while most COTS devices (e.g., TVs) and speakers have only two sound sources. Moreover, most tracking approaches require periodic localization to bootstrap and alleviate accumulated tracking error.

We present AcouRadar, an acoustic-based localization system with a single sound source. At the heart of AcouRadar we adopt a new general model which quantifies signal properties for different frequencies, distances and angles to the source. We verify the model and show that the signal from a single source can provide features for localization. To address practical challenges, (1) we design an online model adaptation method to address model deviation from the real signal, (2) we design pulse-modulated signals to alleviate the impact of the environment, such as the multipath effect, and (3) to address signal dynamics over time, we derive relatively stable amplitude ratios between different frequencies. We implement AcouRadar on Android and evaluate its performance with different COTS speakers in different environments. The results show that AcouRadar achieves single-source localization with an average error of less than 5 cm.

When FTM Discovered MUSIC: Accurate WiFi-based Ranging in the Presence of Multipath

Kevin Jiokeng and Gentian Jakllari (University of Toulouse, France); Alain Tchana (ENS Lyon, France); André-Luc Beylot (University of Toulouse, France)

0
The recent standardization by IEEE of Fine Time Measurement (FTM), a time-of-flight based approach for ranging has the potential to be a turning point in bridging the gap between the rich literature on indoor localization and the so-far tepid market adoption. However, experiments with the first WiFi cards supporting FTM show that while it offers meter-level ranging in clear line-of-sight settings (LOS), its accuracy can collapse in non-line-of-sight (NLOS) scenarios.

We present FUSIC, the first approach that extends FTM's LOS accuracy to NLOS settings, without requiring any changes to the standard. To accomplish this, FUSIC leverages the results from FTM and MUSIC -- both erroneous in NLOS -- into solving the double challenge of 1) detecting when FTM returns an inaccurate value and 2) correcting the errors as necessary. Experiments in 4 different physical locations reveal that a) FUSIC extends FTM's LOS ranging accuracy to NLOS settings -- hence, achieving its stated goal; b) it significantly improves FTM's capability to offer room-level indoor positioning.

Session Chair

Hongzi Zhu (Shanghai Jiao Tong University)

Session 8-B

Trusted Systems

Conference
11:00 AM — 12:30 PM EDT
Local
Jul 9 Thu, 8:00 AM — 9:30 AM PDT

An Adaptive and Fast Convergent Approach to Differentially Private Deep Learning

Zhiying Xu and Shuyu Shi (University of Nanjing, China); Alex X. Liu (Ant Financial Services Group, China); Jun Zhao (Nanyang Technological University, Singapore); Lin Chen (Yale University, USA)

0
With the advent of the era of big data, deep learning has become a prevalent building block in a variety of machine learning and data mining tasks, such as signal processing, network modeling and traffic analysis, to name a few. The massive crowdsourced user data plays a crucial role in the success of deep learning models. However, it has been shown that user data may be inferred from trained neural models and thereby exposed to potential adversaries, which raises information security and privacy concerns. To address this issue, recent studies leverage the technique of differential privacy to design privacy-preserving deep learning algorithms. Albeit successful at privacy protection, differential privacy degrades the performance of neural models. In this paper, we develop ADADP, an adaptive and fast convergent learning algorithm with a provable privacy guarantee. ADADP significantly reduces the privacy cost by improving the convergence speed with an adaptive learning rate, and mitigates the negative effect of differential privacy upon the model accuracy by introducing adaptive noise. The performance of ADADP is evaluated on real-world datasets. Experiment results show that it outperforms state-of-the-art differentially private approaches in terms of both privacy cost and model accuracy.
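The standard differentially private gradient step that such algorithms build on can be sketched as follows: clip each per-example gradient to bound sensitivity, add Gaussian noise, and apply an adaptive per-coordinate learning rate. The clipping norm, noise multiplier, and AdaGrad-style adaptation rule below are illustrative choices, not ADADP's exact formulation.

```python
# Hedged sketch of one DP gradient step with an adaptive learning rate.
import math, random

def dp_step(params, per_example_grads, clip=1.0, noise_mult=1.1,
            lr=0.1, accum=None, seed=0):
    """One noisy, clipped gradient update; returns (new_params, accumulator)."""
    rng = random.Random(seed)
    d, n = len(params), len(per_example_grads)
    clipped_sum = [0.0] * d
    for g in per_example_grads:
        norm = math.sqrt(sum(v * v for v in g))
        scale = min(1.0, clip / (norm + 1e-12))   # clip L2 norm to <= clip
        for i in range(d):
            clipped_sum[i] += g[i] * scale
    sigma = noise_mult * clip                      # Gaussian-mechanism scale
    noisy = [(s + rng.gauss(0.0, sigma)) / n for s in clipped_sum]
    accum = accum or [0.0] * d
    new_params = []
    for i in range(d):
        accum[i] += noisy[i] ** 2                  # AdaGrad-style accumulator
        new_params.append(params[i] - lr * noisy[i] / (math.sqrt(accum[i]) + 1e-8))
    return new_params, accum

# One step on a 2-parameter model with two per-example gradients.
p, acc = dp_step([0.0, 0.0], [[3.0, 4.0], [0.3, 0.4]])
```

Clipping bounds each example's influence on the update, which is what makes the added Gaussian noise yield a differential privacy guarantee.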

Enabling Execution Assurance of Federated Learning at Untrusted Participants

XiaoLi Zhang, Fengting Li, Zeyu Zhang and Qi Li (Tsinghua University, China); Cong Wang (City University of Hong Kong, Hong Kong); Jianping Wu (Tsinghua University, China)

0
Federated learning (FL), as a privacy-preserving machine learning framework, draws growing attention in both industry and academia. It can obtain a jointly accurate model by distributing training tasks to data owners without centralized data collection. However, FL faces new security problems, as it loses direct control over the training processes. Thus, one fundamental demand is to ensure that participants execute training tasks as intended.

In this paper, we propose TrustFL, a practical scheme to build assurance of participants' training execution with high confidence. We employ Trusted Execution Environments (TEEs) to attest to the correct execution. Particularly, instead of performing all training processes inside the TEE, we use the TEE to randomly check a small fraction of the training processes with tunable levels of assurance, while all processes are executed on the co-located faster processor, e.g., a GPU, for efficiency. Besides, we adopt a commitment scheme and devise a specific data selection method to prevent cheating, such as processing only TEE-requested computation or uploading old results. We prototype TrustFL using a GPU and SGX and evaluate its performance. The results show that TrustFL achieves one to two orders of magnitude speedup compared with training purely inside SGX, while assuring correct training with a confidence level of 99%.

EncELC: Hardening and Enriching Ethereum Light Clients with Trusted Enclaves

Chengjun Cai (City University of Hong Kong, Hong Kong); Lei Xu (City University of Hong Kong, China & Nanjing University of Science and Technology, Hong Kong); Zhou Anxin, Ruochen Wang and Cong Wang (City University of Hong Kong, Hong Kong); Qian Wang (Wuhan University, China)

0
The rapid growth of the Ethereum blockchain has brought extremely heavy overhead for coin owners and developers to bootstrap and access transactions on Ethereum. To address this, light clients are used, which store only a small fraction of the blockchain data and rely on bootstrapped full nodes for transaction retrievals. However, because the retrieval requests are outsourced, this raises several severe concerns about the integrity of returned results and the leakage of sensitive blockchain access histories, largely hindering the wider adoption of this important lightweight design. From another perspective, the continuously increasing blockchain storage also calls for more effective query functionalities for the Ethereum blockchain, so as to allow flexible and precise transaction retrievals.

In this paper, we propose EncELC, a new Ethereum light client design that enforces full-fledged protections for clients and enables rich queries over the Ethereum blockchain. EncELC leverages trusted hardware (e.g., Intel SGX) as a starting point for building efficient yet secure processing, and further crafts several crucial performance and security refinements to boost query efficiency and conceal leakage inside and outside the SGX enclave. We implement a prototype of EncELC and test its performance in several real settings, and the results confirm the practicality of EncELC.

Mneme: A Mobile Distributed Ledger

Dimitris Chatzopoulos (Hong Kong University of Science and Technology, Hong Kong); Sujit Gujar (International Institute of Information Technology, Hyderabad, India); Boi Faltings (Swiss Federal Institute of Technology (EPFL), Switzerland); Pan Hui (Hong Kong University of Science and Technology & University of Helsinki, Hong Kong)

0
Advances in mobile computing have paved the way for new types of distributed applications that can be executed solely by mobile devices in device-to-device (D2D) ecosystems (e.g., crowdsensing). More sophisticated applications, like cryptocurrencies, need distributed ledgers to function. Distributed ledgers, such as blockchains and directed acyclic graphs (DAGs), employ consensus protocols to add data in the form of blocks. However, such protocols are designed for resourceful devices that are interconnected via the Internet. Moreover, existing distributed ledgers are not deployable in D2D ecosystems since their storage needs increase continuously. In this work, we introduce Mneme, a DAG-based distributed ledger that can be maintained solely by mobile devices and operates via two consensus protocols: Proof-of-Context (PoC) and Proof-of-Equivalence (PoE). PoC employs users' context to add data to Mneme. PoE is executed periodically to summarize data and produce equivalent blocks that require less storage. We analyze the security of Mneme and justify the ability of PoC and PoE to guarantee the characteristics of distributed ledgers: persistence and liveness. Furthermore, we analyze potential attacks from malicious users and prove that the probability of a successful attack is inversely proportional to the square of the number of mobile users who maintain Mneme.

Session Chair

Kai Zeng (George Mason University)

Session 8-C

Security IV

Conference
11:00 AM — 12:30 PM EDT
Local
Jul 9 Thu, 8:00 AM — 9:30 AM PDT

DRAMD: Detect Advanced DRAM-based Stealthy Communication Channels with Neural Networks

Zhiyuan Lv and Youjian Zhao (Tsinghua University, China); Chao Zhang (Institute for Network Sciences and Cyberspace, Tsinghua University, China); Haibin Li (Tsinghua University, China)

0
Shared resources facilitate stealthy communication channels, including side channels and covert channels, which greatly endanger information security, even in cloud environments. As a commonly shared resource, DRAM memory also serves as a source of stealthy channels. Existing solutions rely on two common features of DRAM-based channels, i.e., high cache miss rates and high bank locality, to detect the existence of such channels. However, such solutions can be defeated. In this paper, we point out the weakness of existing detection solutions by demonstrating a new advanced DRAM-based channel, which utilizes Intel SGX hardware to conceal cache misses and bank locality. Further, we propose DRAMD, a novel neural-network-based solution to detect such advanced stealthy channels. DRAMD uses hardware performance counters to track not only the cache miss events used by existing solutions, but also the counts of branches and instructions executed, as well as branch misses. DRAMD then utilizes neural networks to model the access patterns of different applications and thereby detects potential stealthy communication channels. Our evaluation shows that DRAMD achieves up to 99% precision with 100% recall. Furthermore, DRAMD introduces less than 5% performance overhead and negligible impact on legacy applications.

PPGPass: Nonintrusive and Secure Mobile Two-Factor Authentication via Wearables

Yetong Cao (Beijing Institute of Technology, China); Qian Zhang (Tsinghua University, China); Fan Li and Song Yang (Beijing Institute of Technology, China); Yu Wang (Temple University, USA)

1
Two-factor authentication is promising for mobile devices to improve system security and enhance user privacy. Existing solutions usually require some form of user effort, which might seriously affect user experience and delay authentication. In this paper, we propose PPGPass, a novel mobile two-factor authentication system, which leverages photoplethysmography (PPG) sensors in wrist-worn wearables to extract individual characteristics of PPG signals. To be both nonintrusive and secure, we design a two-stage algorithm to separate clean heartbeat signals from PPG signals contaminated by motion artifacts, which allows verifying users without requiring them to stay still during authentication. In addition, to deal with the non-cancelable issue when biometrics are compromised, we design a repeatable and non-invertible method to generate cancelable feature templates as alternative credentials, which enables defense against man-in-the-middle attacks and replay attacks. To the best of our knowledge, PPGPass is the first nonintrusive and secure mobile two-factor authentication system based on PPG sensors in wearables. We build a prototype of PPGPass and conduct comprehensive experiments involving multiple participants. PPGPass achieves an average F1 score of 95.3%, which confirms its high effectiveness, security, and usability.

ROBin: Known-Plaintext Attack Resistant Orthogonal Blinding via Channel Randomization

Yanjun Pan (University of Arizona, USA); Yao Zheng (University of Hawai'i at Mānoa, USA); Ming Li (University of Arizona, USA)

2
Orthogonal blinding based schemes for wireless physical layer security aim to achieve secure communication by injecting noise into channels orthogonal to the main channel, corrupting the eavesdropper's signal reception. These methods, albeit practical, have been proven vulnerable against multi-antenna eavesdroppers who can filter the message from the noise. The vulnerability is rooted in the fact that the main channel state remains static in spite of the noise injection. Our proposed scheme leverages a reconfigurable antenna for Alice to rapidly change the channel state during transmission, and a compressive-sensing-based algorithm for her to predict and cancel the changing effects for Bob. As a result, the communication between Alice and Bob remains clear, whereas the randomized channel state prevents Eve from launching the known-plaintext attack. We formally analyze the security of the scheme against both single- and multi-antenna eavesdroppers and identify its unique anti-eavesdropping properties due to the artificially created fast-changing channel. We conduct extensive simulations and real-world experiments to evaluate its performance. Empirical results show that our scheme can suppress Eve's attack success rate to the level of random guessing, even if she knows all the symbols transmitted through other antenna modes.
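The orthogonal blinding baseline that this work hardens can be illustrated with a two-antenna toy example: the data symbol is beamformed along Bob's channel h, while the artificial noise is sent along the direction with zero inner product with h, so Bob's sample is untouched but any other channel direction picks up the noise. This is a generic textbook sketch, not the authors' scheme, and the channel values are made up.

```python
# Toy orthogonal blinding: 2 tx antennas, single-antenna receivers.
def blind_transmit(h, symbol, noise):
    """h: length-2 complex channel to Bob; returns the 2-antenna tx vector."""
    # data precoder aligned with h, normalized so Bob receives exactly `symbol`
    norm2 = (h[0] * h[0].conjugate() + h[1] * h[1].conjugate()).real
    data = [h[0].conjugate() * symbol / norm2, h[1].conjugate() * symbol / norm2]
    # null-space direction: h[0]*z[0] + h[1]*z[1] == 0, so Bob never sees it
    z = [-h[1], h[0]]
    return [d + noise * zi for d, zi in zip(data, z)]

def receive(channel, tx):
    """Noiseless received sample: plain inner product channel . tx."""
    return channel[0] * tx[0] + channel[1] * tx[1]

h = [1 + 1j, 2 - 1j]                       # Bob's (assumed known) channel
tx = blind_transmit(h, 1 + 0j, noise=5 + 3j)
```

The known-plaintext weakness the abstract targets arises precisely because h, and hence the null direction z, stays fixed over many symbols; randomizing the channel breaks that.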

Setting the Yardstick: A Quantitative Metric for Effectively Measuring Tactile Internet

Joseph Verburg (Delft University of Technology, The Netherlands); Kroep Kees (TU Delft, The Netherlands); Vineet Gokhale and Venkatesha Prasad (Delft University of Technology, The Netherlands); Vijay S Rao (Cognizant Technology Solutions & Delft University of Technology, The Netherlands)

0
The next frontier in communications is the transmission of touch over the Internet -- popularly termed the Tactile Internet (TI) -- containing both tactile and kinesthetic feedback. While enormous efforts have been undertaken to design TI enablers, barely any emphasis has been placed on contemplating and diagnosing (impaired) performance. Existing qualitative and quantitative performance metrics -- predominantly based on AV transmissions -- serve only as coarse-grained measures of the perceptual impairment, and hence are unsuitable for isolating performance bottlenecks. In this paper, we design quantitative metrics for measuring the quality of a TI session that are agnostic to haptic coders, sophisticated algorithms, and network parameters. As we need to compare transmitted and received haptic signals, we use Dynamic Time Warping (DTW) from the speech recognition literature and develop two new quantitative metrics, (a) Effective Time-Offset (ETO) and (b) Effective Value-Offset (EVO), that comprehensively characterize degradation in the haptic signal profile on a finer scale. We outline the mathematical foundation through rigorous TI experiments incorporating a network emulator and haptic devices. We demonstrate the effectiveness of our proposed metrics through practical measurements using a haptic device, showing 40-150x fewer delay adjustments for only 4%-17% increased RMSE compared to DTW.
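The idea of reading a time offset off a DTW alignment can be sketched as follows: align the transmitted and received signals with classic DTW, then average the index shift along the optimal warping path. The metric below is a simplification for illustration, not the paper's exact ETO formulation.

```python
# Classic O(n*m) DTW with path backtracking, plus a mean-time-shift metric.
def dtw_path(a, b):
    """Return an optimal DTW alignment path as a list of (i, j) index pairs."""
    INF = float("inf")
    n, m = len(a), len(b)
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost[i][j] = abs(a[i - 1] - b[j - 1]) + min(
                cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    # backtrack along cheapest predecessors from the end of both signals
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        i, j = min([(i - 1, j), (i, j - 1), (i - 1, j - 1)],
                   key=lambda p: cost[p[0]][p[1]])
    return path[::-1]

def effective_time_offset(a, b):
    """Mean index shift along the DTW path (in samples); simplified ETO-like metric."""
    path = dtw_path(a, b)
    return sum(j - i for i, j in path) / len(path)
```

For identical signals the path is the diagonal and the offset is zero; a delayed copy of a signal yields a positive offset.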

Session Chair

Xinwen Fu (University of Massachusetts Lowell)

Session 8-D

Video Streaming

Conference
11:00 AM — 12:30 PM EDT
Local
Jul 9 Thu, 8:00 AM — 9:30 AM PDT

FastVA: Deep Learning Video Analytics Through Edge Processing and NPU in Mobile

Tianxiang Tan and Guohong Cao (The Pennsylvania State University, USA)

2
Many mobile applications have been developed to apply deep learning for video analytics. Although these advanced deep learning models can provide better results, they also suffer from high computational overhead, which means longer delays and more energy consumption when running on mobile devices. To address this issue, we propose a framework called FastVA, which supports deep learning video analytics through edge processing and the Neural Processing Unit (NPU) in mobile devices. The major challenge is to determine when to offload the computation and when to use the NPU. Based on the processing time and accuracy requirements of the mobile application, we study two problems: Max-Accuracy, where the goal is to maximize the accuracy under some time constraints, and Max-Utility, where the goal is to maximize the utility, a weighted function of processing time and accuracy. We formulate them as integer programming problems and propose heuristic-based solutions. We have implemented FastVA on smartphones and demonstrated its effectiveness through extensive evaluations.
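The Max-Accuracy flavor of this scheduling problem can be sketched with a simple greedy heuristic: start every frame on the fast-but-less-accurate local NPU, then upgrade frames to the slower, more accurate edge by best accuracy gain per extra second while the time budget allows. The greedy rule and all numbers are illustrative, not the paper's formulation or data.

```python
# Hedged sketch of a greedy placement heuristic for a Max-Accuracy-style problem.
def schedule(frames, budget):
    """frames: list of dicts with 'npu' and 'edge' (time, accuracy) pairs."""
    plan = ["npu"] * len(frames)
    spent = sum(f["npu"][0] for f in frames)
    # rank candidate upgrades by accuracy gain per extra second of processing
    upgrades = sorted(
        range(len(frames)),
        key=lambda i: -((frames[i]["edge"][1] - frames[i]["npu"][1]) /
                        max(frames[i]["edge"][0] - frames[i]["npu"][0], 1e-9)),
    )
    for i in upgrades:
        extra = frames[i]["edge"][0] - frames[i]["npu"][0]
        if frames[i]["edge"][1] > frames[i]["npu"][1] and spent + extra <= budget:
            plan[i] = "edge"
            spent += extra
    accuracy = sum(frames[i][p][1] for i, p in enumerate(plan))
    return plan, spent, accuracy

# Two frames, a 4-second budget: only the cheaper upgrade fits.
plan, spent, accuracy = schedule(
    [{"npu": (1.0, 0.6), "edge": (3.0, 0.9)},
     {"npu": (1.0, 0.5), "edge": (2.0, 0.9)}], budget=4.0)
```

The paper solves an integer program; a greedy like this is only a cheap stand-in to show the trade-off structure.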

Improving Quality of Experience by Adaptive Video Streaming with Super-Resolution

Yinjie Zhang (Peking University, China); Yuanxing Zhang (School of EECS, Peking University, China); Yi Wu, Yu Tao and Kaigui Bian (Peking University, China); Pan Zhou (Huazhong University of Science and Technology, China); Lingyang Song (Peking University, China); Hu Tuo (IQIYI Science & Technology Co., Ltd., China)

Given today's high-speed mobile Internet access, audiences expect much higher video quality than before. Video service providers have deployed dynamic video bitrate adaptation services to fulfill such user demands. However, legacy video bitrate adaptation techniques are highly dependent on estimates of dynamic bandwidth, fail to integrate video quality enhancement techniques, and do not consider the heterogeneous computing capabilities of client devices, leading to low quality of experience (QoE) for users. In this paper, we present a super-resolution based adaptive video streaming (SRAVS) framework, which applies a Reinforcement Learning (RL) model to integrate the video super-resolution (VSR) technique with the video streaming strategy. The VSR technique allows clients to download low-bitrate video segments, then reconstruct and enhance them into high-quality video segments, while making the system less dependent on estimates of dynamic bandwidth. The RL model investigates both playback statistics and distinguishing features related to client-side computing capabilities. Trace-driven emulations over real-world videos and bandwidth traces verify that SRAVS significantly improves QoE for users compared to state-of-the-art video streaming strategies with or without VSR techniques.

Stick: A Harmonious Fusion of Buffer-based and Learning-based Approach for Adaptive Streaming

Tianchi Huang (Tsinghua University, China); Chao Zhou (Beijing Kuaishou Technology Co., Ltd, China); Rui-Xiao Zhang, Chenglei Wu, Xin Yao and Lifeng Sun (Tsinghua University, China)

Existing off-the-shelf buffer-based approaches leverage a simple yet effective buffer-bound to control the adaptive bitrate (ABR) streaming system. Nevertheless, such approaches with standard parameters fail to provide high quality of experience (QoE) video streaming under all considered network conditions. Meanwhile, the state-of-the-art learning-based ABR approach Pensieve outperforms existing schemes but is impractical to deploy. Therefore, how to harmoniously fuse the buffer-based and learning-based approaches has become a key challenge for further enhancing ABR methods. In this paper, we propose Stick, an ABR algorithm that fuses deep learning with the traditional buffer-based method. Stick uses deep reinforcement learning (DRL) to train a neural network that outputs the buffer-bound controlling the buffer-based approach, maximizing the QoE metric under different parameters. Trace-driven emulation shows that Stick outperforms Pensieve by 9.41% while reducing overhead by 88%. Moreover, to further reduce computational cost while preserving performance, we propose Trigger, a lightweight neural network that determines whether the buffer-bound should be adjusted. Experimental results show that Stick+Trigger rivals or outperforms existing schemes in average QoE by 1.7%-28% and reduces Stick's computational overhead by 24%-61%. Extensive real-world evaluations also demonstrate the superiority of Stick over existing state-of-the-art approaches.
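A minimal sketch of the buffer-based controller that a learned buffer-bound would drive, in the style of BBA-like schemes. The linear mapping and the `reservoir` parameter are illustrative assumptions, not Stick's exact rule.

```python
def select_bitrate(buffer_level, buffer_bound, bitrates, reservoir=5.0):
    """Buffer-based bitrate selection: below the reservoir pick the lowest
    rate, above the buffer-bound pick the highest, and map linearly in
    between. `buffer_bound` (seconds of buffered video) is the knob that
    Stick's learned network would output each decision epoch."""
    bitrates = sorted(bitrates)
    if buffer_level <= reservoir:
        return bitrates[0]
    if buffer_level >= buffer_bound:
        return bitrates[-1]
    frac = (buffer_level - reservoir) / (buffer_bound - reservoir)
    idx = int(frac * (len(bitrates) - 1))
    return bitrates[idx]
```

Raising the buffer-bound makes the controller more conservative (it climbs to high bitrates only with a large buffer cushion); lowering it makes the controller aggressive, which is exactly the trade-off a learned bound can tune per network condition.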

Streaming 360° Videos using Super-resolution

Mallesham Dasari (Stony Brook University, USA); Arani Bhattacharya (KTH Royal Institute of Technology, Sweden); Santiago Vargas, Pranjal Sahu, Aruna Balasubramanian and Samir R. Das (Stony Brook University, USA)

360° videos provide an immersive experience to users but require considerably more bandwidth to stream than regular videos. State-of-the-art 360° video streaming systems use viewport prediction to reduce the bandwidth requirement; this involves predicting which part of the video the user will view and fetching only that content. However, viewport prediction is error-prone, resulting in poor user QoE. We design PARSEC, a 360° video streaming system that reduces the bandwidth requirement while improving video quality. PARSEC trades off bandwidth for more client compute to achieve its goals. PARSEC uses a compression technique based on super-resolution, where the video is significantly compressed at the server and the client runs a deep learning model to enhance the video to a much higher quality. PARSEC addresses a set of challenges associated with using super-resolution for 360° video streaming: large deep learning models, high inference latency, and variance in the quality of the enhanced videos. To this end, PARSEC trains small micro-models over shorter video segments and combines traditional video encoding with super-resolution techniques to overcome these challenges. We evaluate PARSEC on a real WiFi network, over a broadband network trace released by the FCC, and over a 4G/LTE network trace.

Session Chair

Zhenhua Li (Tsinghua University)

Session 8-E

Load Balancing

Conference
11:00 AM — 12:30 PM EDT
Local
Jul 9 Thu, 8:00 AM — 9:30 AM PDT

Classification of Load Balancing in the Internet

Rafael Almeida and Italo Cunha (Universidade Federal de Minas Gerais, Brazil); Renata Teixeira (Inria, France); Darryl Veitch (University of Technology Sydney, Australia); Christophe Diot (Google, USA)

Recent advances in programmable data planes and software-defined networking, and even the adoption of IPv6, support novel, more complex load balancing strategies. We introduce the Multipath Classification Algorithm (MCA), which extends existing formalisms and techniques to account for load balancers that may use arbitrary combinations of bits in the packet header for load balancing. We propose optimizations to reduce probing cost that are applicable to MCA and to existing load balancing measurement techniques. Through large-scale measurement campaigns with multiple transport protocols, we characterize and study the evolution of load balancing on the IPv4 and IPv6 Internet. Our results show that load balancing is more prevalent and that load balancing strategies are more mature than previous characterizations have found.

Offloading Dependent Tasks in Mobile Edge Computing with Service Caching

Gongming Zhao and Hongli Xu (University of Science and Technology of China, China); Yangming Zhao and Chunming Qiao (University at Buffalo, USA); Liusheng Huang (University of Science and Technology of China, China)

In Mobile Edge Computing (MEC) applications, many tasks require specific service support for execution and, in addition, have a dependent order of execution among them. However, previous works often ignore the impact of having limited services cached at the edge nodes on (dependent) task offloading, and thus may lead to infeasible offloading decisions or longer completion times. To bridge this gap, this paper studies how to efficiently offload dependent tasks to edge nodes with limited (and predetermined) service caching. We denote this problem, whose objective is to minimize the makespan, as ODT-MM, and prove that no constant-factor approximation algorithm exists for this hard problem. We then design an efficient convex programming based algorithm (CP) to solve it. Moreover, we study a special case with a homogeneous MEC and propose a favorite successor based algorithm (FS) that solves this special case with a competitive ratio of O(1). Extensive simulation results using Google data traces show that our proposed algorithms can significantly reduce application completion time, by about 27-51% compared with other alternatives.

One More Config is Enough: Saving (DC)TCP for High-speed Extremely Shallow-buffered Datacenters

Wei Bai (Microsoft Research Asia, China); Shuihai Hu (The Hong Kong University of Science and Technology, China); Kai Chen (Hong Kong University of Science and Technology, China); Kun Tan (Huawei, China); Yongqiang Xiong (Microsoft Research Asia, China)

The link speed in production datacenters is growing fast, from 1Gbps to 40Gbps or even 100Gbps. However, the buffer size of commodity switches increases slowly, e.g., from 4MB at 1Gbps to 16MB at 100Gbps, and is thus significantly outpaced by the link speed. In such extremely shallow-buffered networks, today's TCP/ECN solutions, such as DCTCP, suffer from either excessive packet loss or substantial throughput degradation.

To this end, we present BCC, a simple yet effective solution that requires just one more ECN config than prior solutions. BCC operates based on real-time global buffer utilization. When the available buffer suffices, BCC delivers both high throughput and a low packet loss rate, as in prior work; once the buffer becomes insufficient, BCC automatically triggers shared-buffer ECN to prevent packet loss at the cost of a small throughput sacrifice. BCC is readily deployable on commodity switches. We validate BCC's feasibility in a small 100G testbed and evaluate its performance using large-scale simulations. Our results show that BCC maintains a low packet loss rate while only slightly degrading throughput when the buffer becomes insufficient. For example, compared to current practice, BCC achieves up to 94.4% lower 99th-percentile FCT for small flows while degrading FCT for large flows by at most 3%.
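BCC's switching idea can be sketched as follows. The threshold values and the shape of the shared-buffer rule are illustrative assumptions, not the paper's configuration.

```python
def ecn_marking_threshold(buffer_used_kb, buffer_total_kb,
                          per_port_threshold_kb=80,
                          shared_fraction=0.7,
                          utilization_trigger=0.9):
    """Sketch of a BCC-style switching rule: use the standard per-port
    DCTCP ECN threshold while the shared buffer is ample, and fall back to
    a shared-buffer ECN threshold tied to remaining headroom once global
    utilization crosses the trigger, trading a little throughput for zero
    loss. All numeric parameters here are illustrative."""
    if buffer_used_kb / buffer_total_kb < utilization_trigger:
        return ("per_port", per_port_threshold_kb)
    headroom_kb = (buffer_total_kb - buffer_used_kb) * shared_fraction
    return ("shared_buffer", headroom_kb)
```

The key design point, as the abstract notes, is that only the second (shared-buffer) config is new; the normal-operation path is unchanged from DCTCP practice.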

TINA: A Fair Inter-datacenter Transmission Mechanism with Deadline Guarantee

Xiaodong Dong (Tianjin University, China); Wenxin Li (Hong Kong University of Science & Technology, Hong Kong); Xiaobo Zhou and Keqiu Li (Tianjin University, China); Heng Qi (Dalian University of Technology, China)

Geographically distributed clouds are a promising technique for service providers to achieve high performance. For inter-datacenter transfers, deadline guarantees and fairness are the two most important requirements. On the one hand, to ensure that more transfers finish before their deadlines, preemptive scheduling policies are widely used, which leads to transfer starvation and is hence unfair. On the other hand, to ensure fairness, inter-datacenter bandwidth is fairly shared among transfers with per-flow bandwidth allocation, which leads to missed deadlines. A mechanism that achieves these two seemingly conflicting objectives simultaneously is still missing. In this paper, we propose TINA, which schedules network transfers fairly while providing deadline guarantees. TINA allows transfers to compete freely with each other for bandwidth: each transfer is assigned a probability that indicates whether it transmits or not. We formulate the competition among transfers as an El Farol game while keeping the traffic load under a threshold to avoid congestion. We then prove that the Nash equilibrium is the optimal strategy and propose a lightweight algorithm to derive it. Finally, both simulation and testbed experiment results show that TINA achieves superior performance to state-of-the-art methods in terms of fairness and deadline guarantee rate.
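The probabilistic transmission idea can be sketched with a symmetric mixed strategy that keeps the expected offered load at the congestion threshold. This is a simplification of TINA's El Farol analysis, assuming symmetric transfers; TINA's actual equilibrium derivation is more involved.

```python
def transmit_probability(demands, capacity):
    """Symmetric mixed strategy: every transfer transmits with the same
    probability p, chosen so expected offered load does not exceed the
    capacity threshold. An El Farol-style sketch, not TINA's derivation."""
    total = sum(demands)
    p = min(1.0, capacity / total) if total > 0 else 1.0
    return [p] * len(demands)

def expected_load(demands, probs):
    """Expected offered load under the mixed strategy."""
    return sum(d * p for d, p in zip(demands, probs))
```

Because every transfer gets the same probability, no transfer is starved (fairness), while the expected load stays at the threshold (congestion avoidance), which is the pair of properties the abstract is after.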

Session Chair

Mingkui Wei (Sam Houston State University)

Session 8-F

Wireless Charging

Conference
11:00 AM — 12:30 PM EDT
Local
Jul 9 Thu, 8:00 AM — 9:30 AM PDT

An Effective Multi-node Charging Scheme for Wireless Rechargeable Sensor Networks

Tang Liu (Sichuan Normal University, China); BaiJun Wu (University of Louisiana at Lafayette, USA); Shihao Zhang, Jian Peng and Wenzheng Xu (Sichuan University, China)

With the maturation of wireless charging technology, Wireless Rechargeable Sensor Networks (WRSNs) have become a promising solution for prolonging network lifetimes. Recent studies propose to employ a mobile charger (MC) to simultaneously charge multiple sensors within the same charging range, so that charging performance can be improved. In this paper, we aim to jointly optimize the number of dead sensors and the energy usage effectiveness in such multi-node charging scenarios. We achieve this by introducing a partial charging mechanism: instead of following the conventional approach in which each sensor is fully charged in one time step, our work allows the MC to fully charge a sensor over multiple visits. We show that the partial charging mechanism causes minimizing the number of dead sensors and maximizing the energy usage effectiveness to conflict with each other. We formulate this problem and develop a multi-node temporal spatial partial-charging algorithm (MTSPC) to solve it. The optimality of MTSPC is proved, and extensive simulations are carried out to demonstrate its effectiveness.

Energy Harvesting Long-Range Marine Communication

Ali Hosseini-Fahraji, Pedram Loghmannia, Kexiong (Curtis) Zeng and Xiaofan Li (Virginia Tech, USA); Sihan Yu (Clemson University, USA); Sihao Sun, Dong Wang, Yaling Yang, Majid Manteghi and Lei Zuo (Virginia Tech, USA)

This paper proposes a self-sustaining broadband long-range maritime communication system as an alternative to the expensive and slow satellite communications in offshore areas. The proposed system, named Marinet, consists of many buoys. Each buoy has two units: an energy harvesting unit and a wireless communication unit. The energy harvesting unit extracts energy from ocean waves to support the operation of the wireless communication unit. The wireless communication unit at each buoy operates in a TV white space frequency band and connects to other buoys and to wired high-speed gateways on land or islands to form a mesh network. The resulting mesh network provides wireless access services to marine users in its range. Prototypes of the energy harvesting unit and the wireless communication unit were built and field-tested. In addition, to ensure that Marinet maintains stable communications in rough sea states, an ocean link-state prediction algorithm is designed. The algorithm predicts ocean link states based on ocean wave movements. A realistic ocean simulator is designed and used to evaluate how such a link-state prediction algorithm can improve routing performance.

Maximizing Charging Utility with Obstacles through Fresnel Diffraction Model

Chi Lin and Feng Gao (Dalian University of Technology, China); Haipeng Dai (Nanjing University & State Key Laboratory for Novel Software Technology, China); Jiankang Ren, Lei Wang and Guowei WU (Dalian University of Technology, China)

Benefiting from recent breakthroughs in wireless power transfer technology, Wireless Rechargeable Sensor Networks (WRSNs) have become an important research topic. Most prior art focuses on system performance enhancement in an ideal environment that ignores the impact of obstacles. This contradicts practical applications, in which obstacles can be found almost anywhere and have dramatic impacts on energy transmission. In this paper, we concentrate on the problem of charging a practical WRSN in the presence of obstacles to maximize the charging utility under given energy constraints. First, we propose a new theoretical charging model with obstacles based on the Fresnel diffraction model, and conduct experiments to verify its effectiveness. Then, we propose a spatial discretization scheme to obtain a finite set of feasible charging positions for the MC, which greatly reduces computation overhead. Afterwards, we re-formalize charging utility maximization with energy constraints as a submodular function maximization problem and propose a cost-efficient algorithm with approximation ratio ((e-1)/2e)(1-ε) to solve it. Lastly, we demonstrate through test-bed experiments and extensive simulations that our scheme outperforms other algorithms by at least 14.8% in terms of charging utility.
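The submodular maximization step can be sketched with the standard cost-benefit greedy under a knapsack (energy) budget. This is the generic technique behind such approximation guarantees, not the paper's exact algorithm, and the coverage-style utility used below is illustrative.

```python
def greedy_submodular(candidates, utility, costs, budget):
    """Cost-benefit greedy for monotone submodular maximization under a
    knapsack (energy) budget: repeatedly pick the candidate with the best
    marginal-utility-per-cost ratio that still fits. Generic sketch of the
    scheme behind constant-factor approximation guarantees."""
    chosen, spent = [], 0.0
    remaining = list(candidates)
    while remaining:
        base = utility(chosen)
        best, best_ratio = None, 0.0
        for c in remaining:
            if spent + costs[c] > budget:
                continue  # candidate would exceed the energy budget
            ratio = (utility(chosen + [c]) - base) / costs[c]
            if ratio > best_ratio:
                best, best_ratio = c, ratio
        if best is None:
            break  # nothing affordable with positive marginal gain
        chosen.append(best)
        spent += costs[best]
        remaining.remove(best)
    return chosen
```

Here `utility` would be the (diffraction-aware) charging utility of a set of charging positions and `costs` their energy expenditure; any monotone submodular set function works.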

Placing Wireless Chargers with Limited Mobility

Haipeng Dai (Nanjing University & State Key Laboratory for Novel Software Technology, China); Chaofeng Wu, Xiaoyu Wang and Wanchun Dou (Nanjing University, China); Yunhuai Liu (Peking University, China)

This paper studies the problem of Placing directional wIreless chargers with Limited mObiliTy (PILOT): given a budget of mobile directional wireless chargers and a set of static rechargeable devices on a 2D plane, determine deployment positions, stop positions and orientations, and portions of time for all chargers such that the overall charging utility of all devices is maximized. To the best of our knowledge, we are the first to study the placement of mobile chargers. To address PILOT, we propose a (1/2−ε)-approximation algorithm. First, we present a method to approximate the nonlinear charging power of chargers, and propose an approach to construct Maximal Covered Set uniform subareas to reduce the infinite continuous search space for stop positions and orientations to a finite discrete one. Second, we present geometrical techniques to further reduce the infinite solution space for candidate deployment positions to a finite one without performance loss, and transform PILOT into a mixed integer nonlinear programming problem. Finally, we propose a linear programming based greedy algorithm to address it. Simulation and experimental results show that our algorithm outperforms five comparison algorithms by 23.11%-281.10%.

Session Chair

Cong Wang (Old Dominion University)

Session 8-G

Edge Computing II

Conference
11:00 AM — 12:30 PM EDT
Local
Jul 9 Thu, 8:00 AM — 9:30 AM PDT

Collaborate or Separate? Distributed Service Caching in Mobile Edge Clouds

Zichuan Xu and Lizhen Zhou (Dalian University of Technology, China); Sid Chi-Kin Chau (Australian National University, Australia); Weifa Liang (The Australian National University, Australia); Qiufen Xia (Dalian University of Technology, China); Pan Zhou (Huazhong University of Science and Technology, China)

With the development of 5G technology, mobile edge computing is emerging as an enabling technique to promote the Quality of Service (QoS) of network services, and service providers are caching services at the edge cloud. In this paper, we study the problem of service caching in mobile edge networks under a mobile service market with multiple network service providers competing for both the computation and bandwidth resources of the edge cloud. For the problem without resource sharing among network service providers, we propose an Integer Linear Program (ILP) and a randomized approximation algorithm via randomized rounding. For the problem with resource sharing, we devise a distributed and stable game-theoretical mechanism that aims to minimize the social cost of all service providers, by introducing a novel cost sharing model and a coalition formation game. We analyze the performance of the mechanism by showing a guaranteed gap between the solution obtained and the social optimum, i.e., the Strong Price of Anarchy (SPoA).

Cooperative Service Caching and Workload Scheduling in Mobile Edge Computing

Xiao Ma (Beijing University of Posts and Telecommunications, China); Ao Zhou (Beijing University of Posts & Telecommunications, China); Shan Zhang (Beihang University, China); Shangguang Wang (Beijing University of Posts and Telecommunications, China)

Mobile edge computing reduces service response time and core network traffic by pushing cloud functionalities to the network edge. Equipped with storage and computation capacities, edge nodes can cache services of resource-intensive and delay-sensitive mobile applications and process the corresponding computation tasks without outsourcing them to central clouds. However, the heterogeneity of edge resource capacities and the mismatch between edge storage and computation capacities make it difficult to fully utilize both when there is no cooperation among edge nodes. To address this issue, we consider cooperation among edge nodes and investigate cooperative service caching and workload scheduling in mobile edge computing. This problem can be formulated as a mixed integer nonlinear program with non-polynomial computation complexity. To overcome the challenges of subproblem coupling, the computation-communication tradeoff, and edge node heterogeneity, we develop an iterative algorithm called ICE. The algorithm is based on Gibbs sampling, which yields provably near-optimal results, and on the idea of water filling, which has polynomial computation complexity. Simulation results demonstrate that our algorithm jointly reduces service response time and outsourcing traffic compared with the benchmark algorithms.
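The Gibbs-sampling component can be sketched as a generic sampler over binary caching decisions. The Boltzmann update and the toy cost function are illustrative, not ICE's exact formulation.

```python
import math
import random

def gibbs_minimize(cost, n_vars, beta=5.0, iters=500, seed=1):
    """Gibbs-sampling search over binary decision vectors (e.g. which edge
    node caches which service). Each step resamples one coordinate from the
    Boltzmann distribution induced by the cost; a larger beta concentrates
    probability on near-optimal configurations. Generic sketch only."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_vars)]
    best, best_cost = list(x), cost(x)
    for _ in range(iters):
        i = rng.randrange(n_vars)
        c = []
        for v in (0, 1):
            x[i] = v
            c.append(cost(x))
        # Boltzmann probability of setting x[i] = 1: 1 / (1 + e^{beta*(c1-c0)}).
        p1 = 1.0 / (1.0 + math.exp(beta * (c[1] - c[0])))
        x[i] = 1 if rng.random() < p1 else 0
        current = cost(x)
        if current < best_cost:
            best, best_cost = list(x), current
    return best, best_cost
```

In ICE's setting the cost would be the (response time + outsourcing) objective given a caching vector, with the inner workload split handled by the water-filling step.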

Joint Optimization of Signal Design and Resource Allocation in Wireless D2D Edge Computing

Junghoon Kim (Purdue University, USA); Taejoon Kim and Morteza Hashemi (University of Kansas, USA); Christopher G. Brinton (Purdue University & Zoomi Inc., USA); David Love (Purdue University, USA)

In this paper, we study the distributed computational capabilities of device-to-device (D2D) networks. A key characteristic of D2D networks is that their topologies are reconfigurable to cope with network demands. For distributed computing, resource management is challenging due to limited network and communication resources, leading to inter-channel interference. To overcome this, recent research has addressed the problems of network scheduling, subchannel allocation, and power allocation, but has not considered them jointly. In this paper, unlike previous mobile edge computing (MEC) approaches, we propose a joint optimization of wireless signal design and network resource allocation to maximize energy efficiency. Given that the resulting problem is a non-convex mixed integer program (MIP) that is infeasible to solve at scale, we decompose its solution into two parts: (i) a resource allocation subproblem, which optimizes link selection and subchannel allocations, and (ii) a signal design subproblem, which optimizes the transmit beamformer, transmit power, and receive combiner. Simulation results on wireless edge topologies show that our method yields substantial improvements in energy efficiency compared with no offloading and partially optimized methods, and that the efficiency scales well with the size of the network.

INFOCOM 2020 Best Paper: Reducing the Service Function Chain Backup Cost over the Edge and Cloud by a Self-adapting Scheme

Xiaojun Shang, Yaodong Huang, Zhenhua Liu and Yuanyuan Yang (Stony Brook University, USA)

The fast development of virtual network functions (VNFs) brings new opportunities to network service deployment on edge networks. For complicated services, VNFs can chain up to form service function chains (SFCs). Despite the promises, it is still not clear how to backup VNFs to minimize the cost while meeting the SFC availability requirements in an online manner. In this paper, we propose a novel self-adapting scheme named SAB to efficiently backup VNFs over both the edge and the cloud. Specifically, SAB uses both static backups and dynamic ones created on the fly to accommodate the resource limitation of edge networks. For each VNF backup, SAB determines whether to place it on the edge or the cloud, and if on the edge, which edge server to use for load balancing. SAB does not assume failure rates of VNFs but instead strives to find the sweet point between the desired availability of SFCs and the backup cost. Both theoretical performance bounds and extensive simulation results highlight that SAB provides significantly higher availability with lower backup costs compared with existing baselines.

Session Chair

Jiangchuan Liu (Simon Fraser University)

Session Lunch-Break-4

Virtual Lunch Break

Conference
12:30 PM — 2:00 PM EDT
Local
Jul 9 Thu, 9:30 AM — 11:00 AM PDT


Session 9-A

IoT II

Conference
2:00 PM — 3:30 PM EDT
Local
Jul 9 Thu, 11:00 AM — 12:30 PM PDT

An Adaptive Robustness Evolution Algorithm with Self-Competition for Scale-free Internet of Things

Tie Qiu (Tianjin University, China); Zilong Lu (Dalian University of Technology, China); Keqiu Li (Tianjin University, China); Guoliang Xue (Arizona State University, USA); Dapeng Oliver Wu (University of Florida, USA)

The Internet of Things (IoT) comprises numerous sensing nodes that constitute a large scale-free network. Optimizing the network topology to increase resistance against malicious attacks is an NP-hard problem. Heuristic algorithms, particularly genetic algorithms, can effectively handle such problems. However, conventional genetic algorithms are prone to premature convergence, owing to the loss of global search ability caused by the loss of population diversity during evolution. Although this can be alleviated by increasing the population size, additional computational overhead is incurred. Moreover, after crossover and mutation operations, individual changes in the population are mixed, and optimal individuals may be lost, which slows down the evolution of the population. Therefore, we combine the population state with the evolutionary process and propose an adaptive robustness evolution algorithm (AREA) with self-competition for scale-free IoT topologies. In AREA, the crossover and mutation operations are dynamically adjusted according to a population diversity index to ensure global search ability, and a self-competition operation is used to ensure convergence. The simulation results demonstrate that AREA is more effective at improving the robustness of scale-free IoT networks than several existing methods.
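The diversity-driven adaptation can be sketched as follows. The normalized pairwise Hamming distance and the linear rate mapping are illustrative assumptions; the paper's exact diversity index and update rule are not given here.

```python
def diversity_index(population):
    """Mean pairwise Hamming distance between bit-string individuals,
    normalized to [0, 1]. One simple diversity measure; AREA's exact
    index may differ."""
    n, length = len(population), len(population[0])
    total = sum(sum(a != b for a, b in zip(p, q))
                for i, p in enumerate(population)
                for q in population[i + 1:])
    pairs = n * (n - 1) // 2
    return total / (pairs * length)

def adaptive_rates(diversity, pc_range=(0.5, 0.9), pm_range=(0.01, 0.2)):
    """When diversity drops, raise mutation (restore exploration) and lower
    crossover; when diversity is high, do the opposite. The linear mapping
    between the bounds is an illustrative assumption."""
    pc = pc_range[0] + diversity * (pc_range[1] - pc_range[0])
    pm = pm_range[1] - diversity * (pm_range[1] - pm_range[0])
    return pc, pm
```

A fully diverse population gets maximal crossover and minimal mutation; a collapsed population gets the reverse, which counters the premature convergence the abstract describes.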

Bandwidth Part and Service Differentiation in Wireless Networks

Francois Baccelli (UT Austin & The University of Texas at Austin, USA); Sanket Sanjay Kalamkar (INRIA Paris, France)

This paper presents a stochastic geometry-based model for bandwidth part (BWP) in device-to-device wireless networks. BWP allows one to adapt the bandwidth allocated to users depending on their data rate needs. Specifically, in BWP, a wide bandwidth is divided into chunks of smaller bandwidths and the number of bandwidth chunks allocated to a user depends on its needs or type. The BWP model studied here is probabilistic in that the user locations are assumed to form a realization of a Poisson point process and each user decides independently to be of a certain type with some probability. This model allows one to quantify spectrum sharing and service differentiation in this context, namely to predict what performance a user gets depending on its type and the overall performance. This is based on exact representations of key performance metrics for each user type, namely its success probability, the meta distribution of its signal-to-interference ratio, and its Shannon throughput. We also show that, surprisingly, the higher traffic variability stemming from BWP is beneficial: when comparing two networks using BWP and having the same mean signal and the same mean interference powers, the network with higher traffic variability outperforms for all these performance metrics.

Low-Overhead Joint Beam-Selection and Random-Access Schemes for Massive Internet-of-Things with Non-Uniform Channel and Load

Yihan Zou, Kwang Taik Kim, Xiaojun Lin and Mung Chiang (Purdue University, USA); Zhi Ding (University of California at Davis, USA); Risto Wichman (Aalto University School of Electrical Engineering, Finland); Jyri Hämäläinen (Aalto University, Finland)

In this paper, we study low-overhead uplink multi-access algorithms for massive Internet-of-Things (IoT) that can exploit the MIMO performance gain. Although MIMO improves system capacity, it usually requires high overhead due to Channel State Information (CSI) feedback, which is unsuitable for IoT. Recently, a Pseudo-Random Beam-Forming (PRBF) scheme was proposed to exploit the MIMO performance gain for uplink IoT access with uniform channel and load, without collecting CSI at BS. For non-uniform channel and load, new adaptive beam-selection and random-access algorithms are needed to efficiently utilize the system capacity with low overhead. While most existing algorithms for a related multi-channel scheduling problem require each node to at least know some information of the queue length of all contending nodes, we propose a new Low-overhead Multi-Channel Joint Channel-Assignment and Random-Access (L-MC-JCARA) algorithm that reduces the overhead to be independent of the number of interfering nodes. A key novelty is to let the BS estimate the total backlog in each contention group by only observing the random-access events, so that no queue-length feedback is needed from IoT devices. We prove that L-MC-JCARA can achieve at least 0.24 of the capacity region of the optimal centralized scheduler for the corresponding multi-channel system.

Online Control of Preamble Groups with Priority in Cellular IoT Networks

Jie Liu (Hanyang University, Korea (South)); Mamta Agiwal (Sejong University, Korea (South)); Miao Qu and Hu Jin (Hanyang University, Korea (South))

The Internet of Things (IoT) is an ongoing paradigm that promises a truly connected society by integrating heterogeneous services and applications. A major transformation lies in the fact that connected-device use cases will not only become on par with people-oriented connections but will ultimately exceed them in volume. Moreover, due to the diversity of applications and requirements, connected devices manifest different priorities. With this variety of requirements, in terms of latency, payload size, number of connections, update frequency, reliability, etc., the random access process (RAP) also requires modification in cellular IoT. RAP is the first step in establishing a connection between devices and the base station. To prioritize IoT devices in RAP, we propose a novel online algorithm with dynamic preamble distribution over multiple priorities. In the algorithm, we estimate the number of activated devices in each priority class based on Bayes' rule to control the number of preambles assigned to each priority online. Subsequently, we extend our proposal to incorporate access class barring (ACB) to optimize the algorithm. Extensive simulations show the effectiveness of the proposed algorithm over multiple priorities.
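The backlog-estimation step can be sketched with a simple moment estimator from idle-preamble counts, used here as a stand-in for the paper's Bayesian update, plus a proportional preamble split across priority classes (both illustrative).

```python
import math

def estimate_backlog(num_preambles, num_idle):
    """Moment estimate of the number of contending devices from the count
    of idle preambles: with n devices picking uniformly among K preambles,
    E[idle] = K(1 - 1/K)^n, so n is approximately log(idle/K)/log(1 - 1/K).
    A simple stand-in for the paper's Bayesian estimator."""
    if num_idle == 0:
        num_idle = 0.5  # avoid log(0); treat as heavily loaded
    K = num_preambles
    return max(0.0, math.log(num_idle / K) / math.log(1 - 1 / K))

def split_preambles(total_preambles, backlogs, min_share=1):
    """Allocate preambles to priority classes proportionally to their
    estimated backlogs, guaranteeing each class a minimum share. Rounding
    is naive and may be off by one preamble in general."""
    n_classes = len(backlogs)
    alloc = [min_share] * n_classes
    rest = total_preambles - min_share * n_classes
    total_b = sum(backlogs)
    for i, b in enumerate(backlogs):
        alloc[i] += round(rest * b / total_b) if total_b > 0 else rest // n_classes
    return alloc
```

Each random-access slot, the base station would observe idle/success/collision outcomes per class, re-estimate backlogs, and re-split the preamble pool, which is the online control loop the abstract describes.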

Session Chair

Tony T. Luo (Missouri University of Science and Technology)

Session 9-B

Data Management

Conference
2:00 PM — 3:30 PM EDT
Local
Jul 9 Thu, 11:00 AM — 12:30 PM PDT

A Randomly Accessible Lossless Compression Scheme for Time-Series Data

Rasmus Vestergaard, Daniel E. Lucani and Qi Zhang (Aarhus University, Denmark)

We detail a practical compression scheme for lossless compression of time-series data, based on the emerging concept of generalized deduplication. As data is no longer stored for just archival purposes, but needs to be continuously accessed in many applications, the scheme is designed for low-cost random access to its compressed data, avoiding decompression. With this method, an arbitrary bit of the original data can be read by accessing only a few hundred bits in the worst case, several orders of magnitude fewer than state-of-the-art compression schemes. Subsequent retrieval of bits requires visiting at most a few tens of bits. A comprehensive evaluation of the compressor on eight real-life data sets from various domains is provided. The cost of this random access capability is a loss in compression ratio compared with the state-of-the-art compression schemes BZIP2 and 7z, which can be as low as 5% depending on the data set. Compared to GZIP, the proposed scheme has a better compression ratio for most of the data sets. Our method has massive potential for applications requiring frequent random accesses, as the only existing approach with comparable random access cost is to store the data without compression.
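The core idea of generalized deduplication, splitting each chunk into a deduplicated base and a verbatim deviation so that any chunk can be reconstructed by touching only two small items, can be sketched as follows (bit widths illustrative).

```python
def compress(chunks, dev_bits):
    """Generalized-deduplication sketch: each fixed-size integer chunk is
    split into a 'base' (high-order bits, deduplicated via a dictionary)
    and a 'deviation' (low-order bits, stored verbatim)."""
    bases, index = [], {}
    ids, deviations = [], []
    mask = (1 << dev_bits) - 1
    for c in chunks:
        base, dev = c >> dev_bits, c & mask
        if base not in index:           # new base: add to the dictionary
            index[base] = len(bases)
            bases.append(base)
        ids.append(index[base])
        deviations.append(dev)
    return bases, ids, deviations

def read_chunk(bases, ids, deviations, i, dev_bits):
    """Random access: rebuild chunk i from its base id and deviation alone,
    with no sequential decompression of the surrounding stream."""
    return (bases[ids[i]] << dev_bits) | deviations[i]
```

Similar chunks share one dictionary base, which is where the compression comes from, while `read_chunk` shows why random access stays cheap: only a dictionary entry and one deviation are visited.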

On the Optimal Repair-Scaling Trade-off in Locally Repairable Codes

Si Wu and Zhirong Shen (The Chinese University of Hong Kong, China); Patrick Pak-Ching Lee (The Chinese University of Hong Kong, Hong Kong)

0
How to improve the repair performance of erasure-coded storage is a critical issue for maintaining high reliability of modern large-scale storage systems. Locally repairable codes (LRC) are one popular family of repair-efficient erasure codes that mitigate the repair bandwidth and are deployed in practice. To adapt to the changing demands of access efficiency and fault tolerance, modern storage systems also conduct frequent scaling operations on erasure-coded data. In this paper, we analyze the optimal trade-off between the repair and scaling performance of LRC in clustered storage systems. Specifically, we design placement strategies that operate along the optimal repair-scaling trade-off curve subject to the fault tolerance constraints. We prototype and evaluate our placement strategies on a LAN testbed, and show that they outperform the conventional placement scheme in repair and scaling operations.

URSAL: Ultra-Efficient, Reliable, Scalable, and Available Block Storage at Low Cost

Huiba Li (NiceX Lab, China); Yiming Zhang (NUDT & NiceX Lab, China); Haonan Wang (NiceX Lab, China); Ping Zhong (CSU, China)

0
In this paper we design URSAL, an HDD-only block store which provides ultra efficiency, reliability, scalability and availability at low cost. First, we unveil that parallelism is harmful to the performance of HDDs; URSAL therefore designs its storage servers to conservatively perform parallel I/O on HDDs to reduce tail latency. Second, the effectiveness of HDD journals for backup writes varies over different workloads; URSAL therefore collaboratively performs direct block I/O on raw disks and transforms small writes into sequential journal appends. Third, software failure ratios are nontrivial in large-scale block stores; URSAL therefore designs a (software) fault-tolerance mechanism for high availability. Micro-benchmarks show that URSAL significantly outperforms state-of-the-art systems for providing HDD-only block storage.

Working Set Theorems for Routing in Self-Adjusting Skip List Networks

Chen Avin (Ben-Gurion University of the Negev, Israel); Iosif Salem and Stefan Schmid (University of Vienna, Austria)

2
This paper explores the design of dynamic network topologies which adjust to the workload they serve, in a demand-aware and online manner. Such self-adjusting networks (SANs) are enabled by emerging optical technologies, and can be found, e.g., in datacenters. SANs can be used to reduce routing costs by moving frequently communicating nodes topologically closer. However, such reconfigurations also come at a cost, introducing a need for online algorithms which strike an optimal balance between the benefits and costs of reconfigurations.

This paper presents SANs which provide, for the first time, provable working set guarantees: the routing cost between node pairs is proportional to how recently these nodes last communicated. Our SANs rely on a distributed implementation of skip lists (which serves as the topology) and provide additional interesting properties such as local routing. Our first contribution is SASL^2, a randomized and sequential SAN algorithm that achieves the working set property. We then show how SASL^2 can be converted to a distributed algorithm that handles concurrent communication requests and maintains SASL^2's properties. Finally, we present deterministic SAN algorithms.

Session Chair

Chunsheng Xin (Old Dominion University)

Session 9-C

Security V

Conference
2:00 PM — 3:30 PM EDT
Local
Jul 9 Thu, 11:00 AM — 12:30 PM PDT

Lightweight Sybil-Resilient Multi-Robot Networks by Multipath Manipulation

Yong Huang, Wei Wang, Yiyuan Wang and Tao Jiang (Huazhong University of Science and Technology, China); Qian Zhang (Hong Kong University of Science and Technology, Hong Kong)

1
Wireless networking opens up many opportunities to facilitate miniaturized robots in collaborative tasks, while the openness of wireless medium exposes robots to the threats of Sybil attackers, who can break the fundamental trust assumption in robotic collaboration by forging a large number of fictitious robots. Recent advances advocate the adoption of bulky multi-antenna systems to passively obtain fine-grained physical layer signatures, rendering them unaffordable to miniaturized robots. To overcome this conundrum, this paper presents ScatterID, a lightweight system that attaches featherlight and batteryless backscatter tags to single-antenna robots to defend against Sybil attacks. Instead of passively “observing” signatures, ScatterID actively “manipulates” multipath propagation by using backscatter tags to intentionally create rich multipath features obtainable to a single-antenna robot. These features are used to construct a distinct profile to detect the real signal source, even when the attacker is mobile and power-scaling. We implement ScatterID on the iRobot Create platform and evaluate it in typical indoor and outdoor environments. The experimental results show that our system achieves a high AUROC of 0.988 and an overall accuracy of 96.4% for identity verification.

RF-Rhythm: Secure and Usable Two-Factor RFID Authentication

Chuyu Wang (Nanjing University, China); Ang Li, Jiawei Li, Dianqi Han and Yan Zhang (Arizona State University, USA); Jinhang Zuo (Carnegie Mellon University, USA); Rui Zhang (University of Delaware, USA); Lei Xie (Nanjing University, China); Yanchao Zhang (Arizona State University, USA)

3
Passive RFID technology is widely used in user authentication and access control. We propose RF-Rhythm, a secure and usable two-factor RFID authentication system with strong resilience to lost/stolen/cloned RFID cards. In RF-Rhythm, each legitimate user performs a sequence of taps on his/her RFID card according to a self-chosen secret melody. Such rhythmic taps can induce phase changes in the backscattered signals, which the RFID reader can detect to recover the user's tapping rhythm. In addition to verifying the RFID card's identification information as usual, the backend server compares the extracted tapping rhythm with what it acquires in the user enrollment phase. The user passes authentication checks if and only if both verifications succeed. We also propose a novel phase-hopping protocol in which the RFID reader emits Continuous Wave (CW) with random phases for extracting the user's secret tapping rhythm. Our protocol can prevent a capable adversary from extracting and then replaying a legitimate tapping rhythm from sniffed RFID signals. Comprehensive user experiments confirm the high security and usability of RF-Rhythm with false-positive and false-negative rates close to zero.

SeVI: Boosting Secure Voice Interactions with Smart Devices

Xiao Wang and Hongzi Zhu (Shanghai Jiao Tong University, China); Shan Chang (Donghua University, China); Xudong Wang (Shanghai Jiao Tong University, China)

0
Voice interaction, as an emerging human-computer interaction method, has gained great popularity, especially on smart devices. However, due to the open nature of voice signals, voice interaction may cause privacy leakage. In this paper, we propose a novel scheme, called SeVI, to protect voice interaction from being deliberately or unintentionally eavesdropped. SeVI actively generates jamming noise with superior characteristics while a user is performing voice interaction with his/her device, so that attackers cannot obtain the voice contents of the user. Meanwhile, the device leverages prior knowledge of the generated noise to adaptively cancel the received noise, even when the usage environment changes due to movement, so that the user's voice interactions are unaffected. SeVI relies only on a normal microphone and speakers and can be implemented as lightweight software. We have implemented SeVI on a commercial off-the-shelf (COTS) smartphone and conducted extensive real-world experiments. The results demonstrate that SeVI can defend against both online eavesdropping attacks and offline digital signal processing (DSP) analysis attacks.

Towards Context Address for Camera-to-Human Communication

Siyuan Cao, Habiba Farrukh and He Wang (Purdue University, USA)

1
Although existing surveillance cameras can identify people, their utility is limited by the unavailability of any direct camera-to-human communication. This paper proposes a real-time end-to-end system to solve the problem of digitally associating people in a camera view with their smartphones, without knowing the phones' IP/MAC addresses. The key idea is using a person's unique "context features", extracted from videos, as its sole address. The context address consists of: motion features, e.g. walking velocity; and ambience features, e.g. magnetic trend and Wi-Fi signal strength. Upon receiving a broadcast packet from the camera, a user's phone accepts it only if its context address matches the phone's sensor data. We highlight three novel components in our system: (1) definition of discriminative and noise-robust ambience features; (2) effortless ambient sensing map generation; (3) a context feature selection algorithm to dynamically choose lightweight yet effective features which are encoded into a fixed-length header. Real-world and simulated experiments are conducted for different applications. Our system achieves a sending ratio of 98.5%, an acceptance precision of 93.4%, and a recall of 98.3% with ten people. We believe this is a step towards direct camera-to-human communication and will become a generic underlay to various practical applications.
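The accept/reject decision at the phone can be sketched as a per-feature tolerance check. The feature names and tolerances below are hypothetical placeholders, not the paper's actual header encoding:

```python
def matches(context_addr, sensor_data, tolerances):
    """A packet is accepted only if every context feature in the
    header is within a per-feature tolerance of the phone's own
    sensor reading (feature names here are illustrative)."""
    return all(
        abs(context_addr[f] - sensor_data[f]) <= tolerances[f]
        for f in context_addr
    )

# Hypothetical header broadcast by the camera for one tracked person.
header = {"velocity": 1.2, "wifi_rssi": -48.0}
tolerance = {"velocity": 0.3, "wifi_rssi": 5.0}
```

A phone sensing a walking speed of 1.3 m/s and RSSI of -50 dBm would accept this packet; a phone moving at 2.5 m/s would reject it.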

Session Chair

Ning Zhang (Washington University in St. Louis)

Session 9-D

Privacy II

Conference
2:00 PM — 3:30 PM EDT
Local
Jul 9 Thu, 11:00 AM — 12:30 PM PDT

Analysis, Modeling, and Implementation of Publisher-side Ad Request Filtering

Liang Lv (Tsinghua, China); Ke Xu (Tsinghua University, China); Haiyang Wang (University of Minnesota at Duluth, USA); Meng Shen (Beijing Institute of Technology, China); Yi Zhao (Tsinghua University, China); Minghui Li, Guanhui Geng and Zhichao Liu (Baidu, China)

2
Online advertising has been a great driving force for the Internet industry. To maintain steady growth of advertising revenue, advertisement (ad) publishers have made great efforts to increase impressions as well as the conversion rate. However, we notice that the results of these efforts are not as good as expected. Specifically, to show more ads to consumers, publishers have to waste a significant amount of server resources processing ad requests that do not result in consumers' clicks. At the same time, the growing number of ads degrades the consumers' browsing experience.

In this paper, we explore the opportunity to improve publishers' overall utility by handling a selective number of requests on ad servers. In particular, we propose Win2, a publisher-side proactive ad request filtration solution. Upon receiving an ad request, Win2 estimates the probability that the consumer will click if the ad is served. The request is served if the clicking probability is above a dynamic threshold; otherwise, it is filtered to reduce the publisher's resource cost and improve consumer experience. We implement Win2 in Baidu's large-scale ad serving system and the evaluation results confirm its effectiveness.
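The serve-or-filter decision can be sketched as a threshold test with a simple feedback loop. This is a toy control loop under assumed parameters, not Baidu's production logic:

```python
class AdFilter:
    """Serve an ad request only when the predicted click probability
    clears a threshold; the threshold drifts up when requests are
    served faster than the target rate and down otherwise."""

    def __init__(self, target_serve_rate=0.5, step=0.01):
        self.threshold = 0.5
        self.target = target_serve_rate
        self.step = step

    def decide(self, p_click):
        serve = p_click >= self.threshold
        # nudge the threshold toward the target serve rate
        self.threshold += self.step * ((1.0 if serve else 0.0) - self.target)
        self.threshold = min(max(self.threshold, 0.0), 1.0)
        return serve
```

A request with a 0.9 click estimate is served, while a 0.1 estimate is filtered before consuming ad-server resources.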

Differentially Private Range Counting in Planar Graphs for Spatial Sensing

Abhirup Ghosh (Imperial College London, United Kingdom (Great Britain)); Jiaxin Ding (Shanghai Jiao Tong University, China); Rik Sarkar (University of Edinburgh, United Kingdom (Great Britain)); Jie Gao (Rutgers University, USA)

1
This paper considers the problem of privately reporting counts of events recorded by devices in different regions of the plane. Unlike previous range query methods, our approach is not limited to rectangular ranges. We devise novel hierarchical data structures to answer queries over arbitrary planar graphs. This construction relies on balanced planar separators to represent shortest paths using \(O(\log n)\) canonical paths. Pre-computed sums along these canonical paths allow efficient computation of 1D counting range queries along any shortest path. We make use of differential forms together with the 1D mechanism to answer 2D queries in which a range is a union of faces in the planar graph. The methods are designed such that range queries can be answered with a differential privacy guarantee on any single event, with only a poly-logarithmic error. They also allow private range queries to be performed in a distributed setup. Experimental results confirm that the methods are efficient and accurate on real data.
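The flavor of a 1D mechanism with poly-logarithmic error can be illustrated with the standard dyadic-interval (tree-based) Laplace mechanism, used here as a stand-in for the paper's canonical-path sums; the parameters are illustrative assumptions:

```python
import math
import random

def laplace(scale):
    """Draw Laplace noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def build_tree(counts, epsilon):
    """Noisy dyadic-interval sums: each of the O(log n) levels gets
    Laplace noise of scale levels/epsilon, so any range is answered
    from O(log n) nodes with poly-logarithmic total error.
    Assumes len(counts) is a power of two."""
    levels = int(math.log2(len(counts))) + 1
    scale = levels / epsilon
    tree, level = [], counts
    while True:
        tree.append([c + laplace(scale) for c in level])
        if len(level) == 1:
            break
        level = [level[i] + level[i + 1] for i in range(0, len(level), 2)]
    return tree

def range_query(tree, lo, hi):
    """Sum counts in [lo, hi) by greedily covering it with dyadic nodes."""
    total, level = 0.0, 0
    while lo < hi:
        if lo % 2 == 1:
            total += tree[level][lo]
            lo += 1
        if hi % 2 == 1:
            hi -= 1
            total += tree[level][hi]
        lo //= 2
        hi //= 2
        level += 1
    return total
```

Each query touches at most two noisy nodes per level, so the error grows with log n rather than with the range length.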

Message Type Identification of Binary Network Protocols using Continuous Segment Similarity

Stephan Kleber, Rens Wouter van der Heijden and Frank Kargl (Ulm University, Germany)

1
Protocol reverse engineering based on traffic traces infers the behavior of unknown network protocols by analyzing observable network messages. To perform correct deduction of message semantics or behavior analysis, accurate message type identification is an essential first step. However, identifying message types is particularly difficult for binary protocols, whose structural features are hidden in their densely packed data representation. In this paper, we leverage the intrinsic structural features of binary protocols and propose an accurate method for discriminating message types. Our approach uses a continuous similarity measure by comparing feature vectors where vector elements correspond to the fields in a message, rather than discrete byte values. This enables a better recognition of structural patterns, which remain hidden when only exact value matches are considered. We combine Hirschberg alignment with DBSCAN as cluster algorithm to yield a novel inference mechanism. By applying novel autoconfiguration schemes, we do not require manually configured parameters for the analysis of an unknown protocol, as required by earlier approaches. Results of our evaluations show that our approach has considerable advantages in message type identification result quality but also execution performance over previous approaches.
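The contrast with exact value matching can be made concrete with a minimal density-based clustering pass over feature vectors. This is a generic, self-contained DBSCAN sketch using Euclidean distance, not the authors' Hirschberg-alignment pipeline:

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: group feature vectors by continuous similarity
    instead of exact value matches. Returns one label per point
    (-1 means noise); min_pts counts the point itself."""
    def neighbors(i):
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    labels = [None] * len(points)
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nb = neighbors(i)
        if len(nb) < min_pts:
            labels[i] = -1  # provisionally noise
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in nb if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nbj = neighbors(j)
            if len(nbj) >= min_pts:  # expand only from core points
                queue.extend(k for k in nbj if labels[k] is None)
    return labels
```

Two messages whose field values differ slightly still land in the same cluster, whereas byte-exact comparison would separate them.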

Search Me in the Dark: Privacy-preserving Boolean Range Query over Encrypted Spatial Data

Xiangyu Wang and Jianfeng Ma (Xidian University, China); Ximeng Liu (Fuzhou University, China); Robert Deng (Singapore Management University, Singapore); Yinbin Miao, Dan Zhu and Zhuoran Ma (Xidian University, China)

1
With the increasing popularity of geo-positioning technologies and the mobile Internet, spatial keyword data services have attracted growing interest from both the industrial and academic communities in recent years. Meanwhile, massive amounts of data are increasingly outsourced to the cloud in encrypted form, to enjoy the advantages of cloud computing without compromising data privacy. Most existing works focus primarily on privacy-preserving schemes for either spatial or keyword queries, and they cannot be directly applied to the spatial keyword query problem over encrypted data. In this paper, for the first time, we study the challenging problem of Privacy-preserving Boolean Range Query (PBRQ) over encrypted spatial databases. In particular, we propose two novel PBRQ schemes. First, we present a scheme with linear search complexity based on the space-filling curve code and Symmetric-key Hidden Vector Encryption (SHVE). Then, we use tree structures to achieve faster-than-linear search complexity. Thorough security analysis shows that data security and query privacy can be guaranteed during the query process. Experimental results using real-world datasets show that the proposed schemes are efficient and feasible for practical applications, and at least 70x faster than existing techniques in the literature.

Session Chair

Yaling Yang (Virginia Tech)

Session 9-E

Routing

Conference
2:00 PM — 3:30 PM EDT
Local
Jul 9 Thu, 11:00 AM — 12:30 PM PDT

Cost Minimization in Multi-Path Communication under Throughput and Maximum Delay Constraints

Qingyu Liu and Haibo Zeng (Virginia Tech, USA); Minghua Chen (The City University of Hong Kong, Hong Kong); Lingjia Liu (Virginia Tech, USA)

0
We consider the scenario where a sender streams a flow at a fixed rate to a receiver across a multi-hop network. Transmission over a link incurs a cost and a delay, both of which are traffic-dependent. We study the problem of cost minimization in multi-path routing under a maximum delay constraint and a throughput requirement. The problem is important for leveraging the edge-cloud computing platform to support IoT applications, which are sensitive to three critical networking metrics, i.e., cost, maximum delay, and throughput. Our problem jointly considers the three metrics, while existing ones only account for one or two of them. Solving our problem is challenging, as (i) it is NP-complete even to find a feasible solution; (ii) directly extending existing solutions admits problem-dependent maximum delay violations that can be unbounded in certain instances. We design an approximation algorithm and an efficient heuristic. For each feasible instance, our approximation algorithm must achieve the optimal cost, violating constraints by constant ratios. Our heuristic can solve a large portion of feasible instances. The obtained solution must satisfy all constraints. We further characterize a condition under which the cost of our heuristic must be within a problem-dependent gap to the optimal.

Hop-by-Hop Multipath Routing: Choosing the Right Nexthop Set

Klaus Schneider and Beichuan Zhang (University of Arizona, USA); Lotfi Benmohamed (National Institute of Standards and Technology, USA)

0
The Internet can be made more efficient and robust with hop-by-hop multipath routing: Each router on the path can split packets between multiple nexthops in order to 1) avoid failed links and 2) reduce traffic on congested links. Before deciding how to split traffic, one first needs to decide which nexthops to allow at each step. In this paper, we investigate the requirements and trade-offs for making this choice. Most related work chooses the viable nexthops by applying the "Downward Criterion", i.e., only adding nexthops that lead closer to the destination; or more generally by creating a Directed Acyclic Graph (DAG) for each destination. We show that a DAG's nexthop options are necessarily limited, and that, by using certain links in both directions (per destination), we can add further nexthops while still avoiding loops. Our solution LFID (Loop-Free Inport-Dependent) routing, though having a slightly higher time complexity, leads to both a higher number of and shorter potential paths than related work. LFID thus protects against a higher percentage of single and multiple failures (or congestions) and comes close to the performance of arbitrary source routing.
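The baseline Downward Criterion mentioned above can be sketched directly: run Dijkstra from the destination, then allow a neighbor as a nexthop only if it is strictly closer to the destination. This is a generic illustration of the criterion, not the LFID algorithm itself:

```python
import heapq

def downward_nexthops(graph, dst):
    """Per-node nexthop sets under the Downward Criterion: a neighbor
    qualifies only if it is strictly closer to the destination, which
    guarantees loop freedom. `graph` maps node -> {neighbor: cost}."""
    dist = {dst: 0}
    heap = [(0, dst)]
    while heap:  # Dijkstra rooted at the destination
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return {u: sorted(v for v in graph[u] if dist[v] < dist[u])
            for u in graph if u != dst}
```

In a square topology A-B-D / A-C-D with unit costs, A gets both B and C as nexthops toward D, while B and C each get only D; links like B-A are excluded even though LFID-style inport-dependent forwarding could safely use some of them.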

Joint Power Routing and Current Scheduling in Multi-Relay Magnetic MIMO WPT System

Hao Zhou, Wenxiong Hua, Jialin Deng, Xiang Cui, Xiang-Yang Li and Panlong Yang (University of Science and Technology of China, China)

0
Magnetic resonant coupling wireless power transfer (MRC-WPT) enables convenient device charging. When a MIMO MRC-WPT system incorporates multiple relay components, both the relay On-Off states (i.e., power routing) and the TX currents (i.e., current scheduling) can be adjusted to improve charging efficiency and distance. Previous approaches need collaboration and feedback from the energy receiver (RX), achieved using side-channels, e.g., Bluetooth, which is time- and energy-consuming. In this work we propose, design, and implement a multi-relay MIMO MRC-WPT system, and design an almost-optimum joint optimization of power routing and current scheduling, without relying on any feedback from the RX. We carefully decompose the joint optimization problem into two subproblems without affecting the overall optimality of the combined solution. For the current scheduling subproblem, we propose an almost-optimum RX-feedback-independent solution. For the power routing subproblem, we first design a greedy algorithm with a 1/2 approximation ratio, and then design a DQN-based method to further improve its effectiveness. We prototype our system and evaluate it with extensive experiments. Our results demonstrate the effectiveness of the proposed algorithms. The achieved power transfer efficiency (PTE) is on average 3.2x, 1.43x, 1.34x, and 7.3x that of the other four strategies: without relays, with non-adjustable relays, greedy-based, and shortest-path based ones.

Verifying Policy-based Routing at Internet Scale

Xiaozhe Shao and Lixin Gao (University of Massachusetts at Amherst, USA)

0
Routing policy configuration plays a crucial role in determining the path that network traffic takes to reach a destination. Network administrators/operators typically decide the routing policy for their networks/routers independently. The paths/routes resulting from these independently configured routing policies might not necessarily meet the intent of the network administrators/operators. Even the most basic network-wide properties of the routing policies, such as reachability between a pair of nodes, need to be verified.

In this paper, we propose a scheme that translates routing-policy verification problems into Satisfiability Modulo Theories (SMT) problems. The key idea is to formulate the SMT model in a policy-aware manner so as to reduce or eliminate the mutual dependencies between variables as much as possible. Further, we reduce the size of the generated SMT model through pruning. We implement and evaluate the policy-aware model with an out-of-the-box SMT solver. The experimental results show that the policy-aware model can reduce the time it takes to perform verification by as much as 100x even under a modest topology size. It takes only a few minutes to answer a query for a topology containing tens of thousands of nodes.

Session Chair

Jie Wu (Temple University)

Session 9-F

LoRa

Conference
2:00 PM — 3:30 PM EDT
Local
Jul 9 Thu, 11:00 AM — 12:30 PM PDT

CoLoRa: Enabling Multi-Packet Reception in LoRa

Shuai Tong, Zhenqiang Xu and Jiliang Wang (Tsinghua University, China)

1
Long Range (LoRa), or more generically Low-Power Wide Area Network (LPWAN), is a promising platform to connect the Internet of Things. It enables low-cost low-power communication at a few kbps over up to tens of kilometers with a 10-year battery lifetime. However, practical LPWAN deployments suffer from collisions, given the dense deployment of devices and wide coverage area.

We propose CoLoRa, a protocol to decompose large numbers of concurrent transmissions from one collision in LoRa networks. At the heart of CoLoRa, we utilize packet time offset to disentangle collided packets. CoLoRa incorporates several novel techniques to address practical challenges. (1) We translate time offsets, which are difficult to measure, to frequency features that can be reliably measured. (2) We propose a method to cancel inter-packet interference and extract accurate features from low-SNR LoRa signals. (3) We address the frequency shift incurred by CFO and time offset for LoRa decoding. We implement CoLoRa on USRP N210 and evaluate its performance in both indoor and outdoor networks. CoLoRa is implemented in software at the base station and it can work with COTS LoRa nodes. The evaluation results show that CoLoRa improves the network throughput by 3.4x compared with Choir and by 14x compared with LoRaWAN.

DyLoRa: Towards Energy Efficient Dynamic LoRa Transmission Control

Yinghui Li, Jing Yang and Jiliang Wang (Tsinghua University, China)

1
LoRa has been shown to be a promising platform for providing low-power long-range communication at a low data rate for connecting IoT devices. LoRa can adjust transmission parameters including transmission power and spreading factor, leading to different noise resilience, transmission range and energy consumption. Existing LoRa transmission control approaches can hardly achieve optimal energy efficiency, leaving a gap to the optimal solution. In this paper, we propose DyLoRa, a dynamic LoRa transmission control system that optimizes energy efficiency. The main challenge is the very limited data rate of LoRa, which makes it time- and energy-consuming to obtain link statistics. We show that the demodulation symbol error rate can be stable, and thus derive a model for the symbol error rate. We further derive an energy efficiency model based on the symbol error model. DyLoRa can derive parameter settings for optimal energy efficiency even from a single packet. We also adapt the model to different hardware to compensate for deviations. We implement DyLoRa based on LoRaWAN 1.0.2 with an SX1276 LoRa node and an SX1301 LoRa gateway. We evaluate DyLoRa with 11 real deployed nodes. The evaluation results show that DyLoRa improves the energy efficiency by up to 103% compared with the state-of-the-art LoRaWAN ADR.

LiteNap: Downclocking LoRa Reception

Xianjin Xia and Yuanqing Zheng (The Hong Kong Polytechnic University, Hong Kong); Tao Gu (RMIT University, Australia)

2
This paper presents LiteNap, which improves the energy efficiency of LoRa by enabling LoRa nodes to operate in a downclocked 'light sleep' mode for packet reception. A fundamental limit that prevents radio downclocking is the Nyquist sampling theorem, which demands a clock rate of at least twice the bandwidth of LoRa chirps. Our study reveals that under-sampled LoRa chirps suffer frequency aliasing, which causes ambiguity in symbol demodulation. LiteNap addresses the problem by leveraging an empirical observation: the hardware of a LoRa radio can cause phase jitters on modulated chirps, which result in frequency leakage in the time domain. The timing information of phase jitters and frequency leakages can serve as physical fingerprints to uniquely identify modulated chirps. We propose a scheme to reliably extract the fingerprints from under-sampled chirps and resolve ambiguities in symbol demodulation. We implement LiteNap on a software-defined radio platform and conduct trace-driven evaluation. Experiment results show that LiteNap can downclock LoRa nodes to sub-Nyquist rates for energy savings (e.g., 1/8 of the Nyquist rate) without substantially affecting packet reception performance (e.g., >95% packet reception rate).

Online Concurrent Transmissions at LoRa Gateway

Zhe Wang, Linghe Kong and Kangjie Xu (Shanghai Jiao Tong University, China); Liang He (University of Colorado Denver, USA); Kaishun Wu (Shenzhen University, China); Guihai Chen (Shanghai Jiao Tong University, China)

1
Long Range (LoRa) communication, thanks to its wide network coverage and low-energy operation, has attracted extensive attention from both academia and industry. However, the existing LoRa-based Wide Area Network (LoRaWAN) suffers from severe inter-network interference, for the following two reasons. First, densely-deployed LoRa ends usually share the same network configurations, such as spreading factor (SF), bandwidth (BW) and carrier frequency (CF), causing interference when operating in the vicinity. Second, LoRa is tailored for low-power devices, which excludes LoRaWAN from using listen-before-talk (LBT) mechanisms: LoRaWAN has to use a duty-cycled medium access policy and is thus incapable of channel sensing or collision avoidance. To mitigate the inter-network interference, we propose a novel solution achieving online concurrent transmissions at the LoRa gateway, called OCT, which can be easily deployed at the LoRa gateway. We have implemented and evaluated OCT on the USRP platform and commodity LoRa ends, showing that OCT achieves: (i) >90% packet reception rate (PRR), (ii) 3×10^-3 bit error rate (BER), (iii) 2x and 3x throughput in the scenarios of two- and three-packet collisions respectively, and (iv) a 67% latency reduction compared with the state-of-the-art.

Session Chair

Swarun Kumar (Carnegie Mellon University)

Session 9-G

SDN IV

Conference
2:00 PM — 3:30 PM EDT
Local
Jul 9 Thu, 11:00 AM — 12:30 PM PDT

HiFi: Hybrid Rule Placement for Fine-Grained Flow Management in SDNs

Gongming Zhao and Hongli Xu (University of Science and Technology of China, China); Jingyuan Fan (State University of New York at Buffalo, USA); Liusheng Huang (University of Science and Technology of China, China); Chunming Qiao (University at Buffalo, USA)

3
Fine-grained flow management is very useful in many practical applications, e.g., resource allocation, anomaly detection and traffic engineering. However, it is difficult to provide fine-grained management for a large number of flows in SDNs due to switches' limited flow table capacity. While using wildcard rules can reduce the number of flow entries needed, it cannot fully ensure fine-grained management for all the flows without degrading application performance. In this paper, we design and implement HiFi, a system that achieves fine-grained management with a minimal number of flow entries. HiFi achieves this goal by taking a two-step approach: wildcard entry installment and application-specific exact-match entry installment. Optimally installing wildcard and exact-match flow entries, however, is intractable. Therefore, we design approximation algorithms with bounded factors to solve these problems. As a case study, we consider how to achieve network-wide load balancing via fine-grained flow management. Both experimental and simulation results show that HiFi can reduce the number of required flow entries by about 45%-69% and reduce the control overhead by 28%-50% compared with the state-of-the-art approaches for fine-grained flow management.
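The two-step table layout can be sketched as a priority lookup: exact-match entries (installed per application flow) win over aggregating wildcard entries. The string-prefix wildcards and table contents below are illustrative stand-ins for real IP masks, not HiFi's installation algorithms:

```python
def lookup(exact_table, wildcard_table, pkt):
    """Two-step flow lookup sketch: an exact (src, dst) entry takes
    priority; otherwise the packet falls back to the first matching
    wildcard entry on its destination prefix."""
    key = (pkt["src"], pkt["dst"])
    if key in exact_table:
        return exact_table[key]
    for prefix, action in wildcard_table:
        if pkt["dst"].startswith(prefix):
            return action
    return "drop"

# Hypothetical tables: one fine-grained entry plus one aggregate rule.
exact = {("h1", "h2"): "port1"}
wildcard = [("10.0.", "port2")]
```

Only flows that genuinely need application-specific treatment consume exact-match entries; the rest share a handful of wildcard rules, which is what keeps the flow table small.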

Homa: An Efficient Topology and Route Management Approach in SD-WAN Overlays

Diman Zad Tootaghaj and Faraz Ahmed (Hewlett Packard Labs, USA); Puneet Sharma (Hewlett Packard Labs & HP Labs, USA); Mihalis Yannakakis (Columbia University, USA)

1
This paper presents an efficient topology and route management approach for Software-Defined Wide Area Networks (SD-WAN). Traditional WANs suffer from low utilization and lack a global view of the network. Therefore, during failures, topology/service/traffic changes, or new policy requirements, the system does not always converge to the globally optimal state. Using Software-Defined Networking architectures in WANs provides the opportunity to design WANs with higher fault tolerance, scalability, and manageability. We exploit the correlation matrix between the virtual links, derived from the monitoring system, to infer the underlying route topology, and propose a route update approach that minimizes the total route update cost over all flows. We formulate the problem as an integer linear programming optimization problem and provide a centralized control approach that minimizes the total cost while satisfying the quality of service (QoS) on all flows. Experimental results on real network topologies demonstrate the effectiveness of the proposed approach in terms of disruption cost and average disrupted flows.

Incremental Server Deployment for Scalable NFV-enabled Networks

Jianchun Liu, Hongli Xu and Gongming Zhao (University of Science and Technology of China, China); Chen Qian (University of California at Santa Cruz, USA); Xingpeng Fan and Liusheng Huang (University of Science and Technology of China, China)

3
To construct a new NFV-enabled network, there are two critical requirements: minimizing server deployment cost and satisfying switch resource constraints. However, prior work mostly focuses on the server deployment cost, while ignoring the switch resource constraints (e.g., switch's flow-table size). This results in a large number of rules on switches and leads to massive control overhead. To address this challenge, we propose an incremental server deployment (INSD) problem for the construction of scalable NFV-enabled networks. We prove that the INSD problem is NP-hard, and that there is no polynomial-time algorithm with an approximation ratio of (1−ε)·ln m, where ε is an arbitrarily small value and m is the number of requests in the network. We then present an efficient algorithm with an approximation ratio of 2·H(q·p), where q is the number of VNF categories and p is the maximum number of requests through a switch. We evaluate the performance of our algorithm with experiments on a physical platform (Pica8), Open vSwitches (OVSes), and large-scale simulations. Both experiment and simulation results show the high scalability of the proposed algorithm. For example, our solution can reduce the control and rule overhead by about 88% with about 5% additional server deployment, compared with the existing solutions.

Network Slicing in Heterogeneous Software-defined RANs

Qiaofeng Qin (Yale University, USA); Nakjung Choi (Nokia & Bell Labs, USA); Muntasir Raihan Rahman (Microsoft, USA); Marina Thottan (Bell Labs, USA); Leandros Tassiulas (Yale University, USA)

1
5G technologies promise to revolutionize mobile networks and push them to the limits of resource utilization. Besides greater capacity, we also need better resource management via virtualization. End-to-end network slicing involves not only the core but also the Radio Access Network (RAN), which makes this a challenging problem: multiple alternative radio access technologies exist (e.g., LTE, WLAN, and WiMAX), and there is no unifying abstraction for comparing and composing diverse technologies. In addition, existing work assumes that all RAN infrastructure is under a single administrative domain. Software-Defined Radio Access Networking (SD-RAN) offers programmability that facilitates a unified abstraction for resource sharing and composition across multiple providers harnessing different technology stacks. In this paper we propose a new architecture for heterogeneous RAN slicing across multiple providers. A central component of our architecture is a service orchestrator that interacts with multiple network providers and service providers to negotiate resource allocations that are jointly optimal. We propose a double auction mechanism that captures the interaction among selfish parties and guarantees convergence to optimal social welfare in finite time. We then demonstrate the feasibility of the proposed system using open-source SD-RAN systems such as EmPOWER (WiFi) and FlexRAN (LTE).
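As background on double auctions (the abstract does not give the paper's concrete mechanism or its welfare proof), a minimal k-double-auction clearing rule over hypothetical bids and asks can be sketched as:

```python
def clear_double_auction(bids, asks):
    """Minimal k-double-auction clearing: match the highest bids with
    the lowest asks while bid >= ask; all trades clear at the midpoint
    price of the breakeven pair. Illustrative sketch only."""
    bids = sorted(bids, reverse=True)
    asks = sorted(asks)
    k = 0
    while k < min(len(bids), len(asks)) and bids[k] >= asks[k]:
        k += 1
    if k == 0:
        return 0, None          # no profitable trade exists
    price = (bids[k - 1] + asks[k - 1]) / 2
    return k, price

# Hypothetical buyer bids and seller asks for slice resources.
trades, price = clear_double_auction([10, 8, 5, 2], [3, 6, 7, 9])
# 10>=3 and 8>=6 match, 5<7 stops: 2 trades at price (8+6)/2 = 7
```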

Session Chair

Tamer Nadeem (Virginia Commonwealth University)

Session Coffee-Break-4-PM

Virtual Coffee Break

Conference
3:30 PM — 4:00 PM EDT
Local
Jul 9 Thu, 12:30 PM — 1:00 PM PDT

Virtual Coffee Break

N/A

0
This talk does not have an abstract.

Session Chair

N/A

Session 10-A

Localization II

Conference
4:00 PM — 5:30 PM EDT
Local
Jul 9 Thu, 1:00 PM — 2:30 PM PDT

A Structured Bidirectional LSTM Deep Learning Method For 3D Terahertz Indoor Localization

Shukai Fan, Yongzhi Wu and Chong Han (Shanghai Jiao Tong University, China); Xudong Wang (Shanghai Jiao Tong University & Teranovi Technologies, Inc., China)

0
High-accuracy localization technology has gained increasing attention in diverse applications, including gesture and motion control, among others. Due to shadowing, multi-path fading, and blockage effects in indoor propagation, 0.1m-level localization remains challenging. Promising for 6G wireless communications, the Terahertz (THz) spectrum provides ultra-broad bandwidth for indoor applications. Applied to indoor localization, the channel state information (CSI) of THz wireless signals, including angle of arrival (AoA), received power, and delay, has high resolution, which can be exploited for positioning. In this paper, a Structured Bidirectional Long Short-Term Memory (SBi-LSTM) recurrent neural network (RNN) architecture is proposed to solve the CSI-based three-dimensional (3D) THz indoor localization problem with significantly improved accuracy. First, the features of each individual multi-path ray are analyzed in the Bi-LSTM network at the base level. Then, the upper-level residual network (ResNet) of the constructed SBi-LSTM network extracts the geometric information for localization. Simulation results validate the convergence of our SBi-LSTM method and its robustness against indoor non-line-of-sight (NLoS) blockage. Specifically, the localization accuracy in the metric of mean distance error is within 0.27m in the NLoS environment, which demonstrates over 60% improvement over state-of-the-art techniques.

MagB: Repurposing the Magnetometer for Fine-Grained Localization of IoT Devices

Paramasiven Appavoo and Mun Choon Chan (National University of Singapore, Singapore); Bhojan Anand (National University of Singapore & Anuflora International, Singapore)

1
Interest in fine-grained indoor localization remains high, and various approaches based on Radio Frequency (RF), ultrasound, acoustics, magnetic fields, and light have been proposed. However, while the achieved accuracy may be high, many of these approaches do not work well in environments with many obstructions. In this paper, we present MagB, a decimeter-level localization scheme that uses the magnetometer commonly available on existing IoT devices. MagB estimates the bearing of beacons by detecting changes in magnetic field strength; localization is then performed based on Angle-of-Arrival (AoA) information. We have built a prototype of MagB using low-cost, off-the-shelf components. Our evaluation shows that MagB achieves a median accuracy of about 13cm and can localize devices even when they are placed in a steel filing cabinet or inside the casing of a running PC.
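For intuition on AoA-based positioning, a minimal 2D sketch is to intersect the bearing rays measured at two known beacon positions. This is a hypothetical illustration; MagB's actual pipeline (magnetic-field bearing estimation and multi-beacon fusion) is more involved.

```python
import math

def locate_from_bearings(p1, theta1, p2, theta2):
    """Estimate a 2D position from angle-of-arrival bearings observed
    at two known beacon positions, by intersecting the two rays
    p_i + t_i * (cos(theta_i), sin(theta_i))."""
    x1, y1 = p1
    x2, y2 = p2
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]   # cross product of directions
    if abs(denom) < 1e-9:
        raise ValueError("bearings are parallel; no unique intersection")
    # Cramer's rule for t1 in p1 + t1*d1 = p2 + t2*d2
    t1 = ((x2 - x1) * d2[1] - (y2 - y1) * d2[0]) / denom
    return (x1 + t1 * d1[0], y1 + t1 * d1[1])

# Beacons at (0,0) and (2,0), bearings of 45 and 135 degrees
# both point at the target (1, 1).
est = locate_from_bearings((0.0, 0.0), math.pi / 4, (2.0, 0.0), 3 * math.pi / 4)
```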

mmTrack: Passive Multi-Person Localization Using Commodity Millimeter Wave Radio

Chenshu Wu, Feng Zhang, Beibei Wang and K. J. Ray Liu (University of Maryland, USA)

2
Passive human localization and tracking using RF signals has been studied for over a decade. Most existing solutions, however, can only track a single moving subject due to the coarse multipath resolvability limited by bandwidth and antenna count. In this paper, we break through these limitations by leveraging the emerging 60GHz 802.11ad radios. We present mmTrack, the first system that passively localizes and tracks multiple users simultaneously using a single commodity 802.11ad radio. The design of mmTrack consists of three key components. First, we significantly improve the spatial resolution, otherwise limited by the small aperture of the compact 60GHz array, by performing digital beamforming over all receive antennas. Second, we propose a novel multi-target detection approach that tackles the near-far effect and measurement noise. Finally, we devise a robust clustering technique to accurately recognize multiple targets and estimate their respective locations, from which their individual trajectories are derived by a continuous tracking algorithm. We implement mmTrack on commodity 802.11ad devices and evaluate it in indoor environments. Experiments demonstrate that mmTrack counts multiple users precisely with an error of <1 person 97.8% of the time and achieves median location errors of 9.9cm and 19.7cm for dynamic and static targets, respectively.

Selection of Sensors for Efficient Transmitter Localization

Arani Bhattacharya (KTH Royal Institute of Technology, Sweden); Caitao Zhan, Himanshu Gupta, Samir R. Das and Petar M. Djurić (Stony Brook University, USA)

1
We address the problem of localizing an (illegal) transmitter using a distributed set of sensors, focusing on techniques that perform the localization efficiently. Localizing illegal transmitters is an important problem that arises in many applications. It is generally done from the observations of a deployed set of sensors with limited resources, so it is imperative to design techniques that conserve the sensors' energy.

In this paper, we design greedy approximation algorithms for the optimization problem of selecting a given number of sensors so as to maximize an appropriately defined objective function of localization accuracy. The obvious greedy algorithm delivers a constant-factor approximation only for the special case of two hypotheses (potential locations). For the general case of multiple hypotheses, we design a greedy algorithm based on an appropriate auxiliary objective function and show that it delivers a provably approximate solution. We evaluate our techniques over multiple simulation platforms as well as indoor and outdoor testbeds, and demonstrate their effectiveness: our techniques outperform prior and other approaches by up to 50-60% in large-scale simulations.
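The generic greedy-selection template behind such algorithms can be sketched as follows; the toy `coverage` objective here is a stand-in for illustration, not the paper's auxiliary objective function.

```python
def greedy_select(sensors, k, gain):
    """Generic greedy selection: repeatedly add the sensor with the
    largest marginal gain until k sensors are chosen. For monotone
    submodular objectives this template gives the classic
    (1 - 1/e) approximation guarantee."""
    chosen = []
    for _ in range(k):
        remaining = [s for s in sensors if s not in chosen]
        if not remaining:
            break
        best = max(remaining, key=lambda s: gain(chosen + [s]) - gain(chosen))
        chosen.append(best)
    return chosen

# Toy objective: number of candidate transmitter locations covered.
coverage = {"a": {1, 2}, "b": {2, 3}, "c": {4}}
f = lambda sel: len(set().union(*(coverage[s] for s in sel))) if sel else 0
picked = greedy_select(list(coverage), 2, f)
```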

Session Chair

Tamer Nadeem (Virginia Commonwealth University)

Session 10-B

Adaptive Algorithms

Conference
4:00 PM — 5:30 PM EDT
Local
Jul 9 Thu, 1:00 PM — 2:30 PM PDT

Automatically and Adaptively Identifying Severe Alerts for Online Service Systems

Nengwen Zhao (Tsinghua University, China); Panshi Jin, Lixin Wang and Xiaoqin Yang (China Construction Bank, China); Rong Liu (Stevens Institute of Technology, USA); Wenchi Zhang and Kaixin Sui (Bizseer Technology Co., Ltd., China); Dan Pei (Tsinghua University, China)

1
In large-scale online service systems, engineers collect various monitoring data and write many rules to trigger alerts in order to maintain service quality. However, the number of alerts far exceeds what on-call engineers can investigate. Thus, in practice, alerts are classified into several priority levels using manual rules, and on-call engineers primarily focus on handling alerts of the highest priority level (i.e., severe alerts). Unfortunately, due to the complex and dynamic nature of online services, this rule-based approach results in missed severe alerts or troubleshooting time wasted on non-severe alerts. In this paper, we propose AlertRank, an automatic and adaptive framework for identifying severe alerts. Specifically, AlertRank extracts a set of powerful and interpretable features (textual and temporal alert features, and univariate and multivariate anomaly features for monitoring metrics), adopts the XGBoost ranking algorithm to identify severe alerts among all incoming alerts, and uses novel methods to obtain labels for both training and testing. Experiments on datasets from a top global commercial bank demonstrate that AlertRank is effective, achieving an average F1-score of 0.89 and outperforming all baselines, while reducing the number of alerts to be investigated by more than 80%.

On the impact of accurate radio link modeling on the performance of WirelessHART control networks

Yuriy Zacchia Lun (IMT School for Advanced Studies Lucca, Italy); Claudia Rinaldi, Amal Alrish and Alessandro D'Innocenzo (University of L'Aquila, Italy); Fortunato Santucci (University of l'Aquila, Italy)

2
The challenges in the analysis and co-design of wireless networked control systems are well illustrated by wireless industrial control protocols. In this perspective, this paper addresses the modeling and design challenge by focusing on WirelessHART, a networking protocol stack widely adopted for wireless industrial automation. Specifically, we first develop and validate a Markov channel model that abstracts the WirelessHART radio link subject to channel impairments and interference. The link-quality metrics introduced in the theoretical framework are validated so as to accurately represent the average and extreme behavior of the radio link; with these metrics, a consistent finite-state abstraction is straightforward to handle. On the basis of this model, we then derive a stationary Markov jump linear system model that captures the dynamics of a control loop closed over the radio link. Subsequently, we show that our modeling framework is able to discover and manage the challenging subtleties arising from bursty behavior. A key theoretical outcome is the design of a controller that guarantees stability and improves the control performance of the closed-loop system where other approaches based on a simplified channel model fail.

Online Spread Estimation with Non-duplicate Sampling

Yu-e Sun and He Huang (Soochow University, China); Chaoyi Ma and Shigang Chen (University of Florida, USA); Yang Du (University of Science and Technology of China, China); Qingjun Xiao (SouthEast University of China, China)

1
Per-flow spread measurement in high-speed networks has many practical applications. Most prior work is sketch-based and focuses on reducing space requirements to fit in on-chip memory. This design allows measurement at the line rate, but it accepts a tradeoff: expensive computation for spread queries (unsuitable for online operation) and large estimation errors for small flows. This paper complements the prior art with a new spread estimator design based on an on-chip/off-chip model that is common in practice. The new estimator supports online queries in real time and produces spread estimates with better accuracy. By storing traffic data in off-chip memory, our design faces a key technical challenge: efficient non-duplicate sampling. We propose a two-stage solution with on-chip/off-chip data structures and algorithms that are not only efficient but also highly configurable for a variety of probabilistic performance guarantees. We implemented our estimator in hardware using an FPGA. Experimental results based on real traces show that our estimator increases query throughput by around three orders of magnitude, reduces the mean relative (absolute) error by around two (one) orders of magnitude, and saves 84% of on-chip memory for probabilistic guarantees in flow classification compared with the prior art.
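One way to realize non-duplicate sampling with a small on-chip structure is a hash bitmap that forwards each distinct element off-chip at most once. This is a hypothetical sketch, not the paper's exact two-stage design:

```python
import hashlib

class NonDuplicateSampler:
    """Sketch of non-duplicate sampling with a small on-chip bitmap:
    an element is forwarded to off-chip storage only if its hashed bit
    was previously unset, so each distinct element is sampled at most
    once (hash collisions may suppress a few distinct elements)."""

    def __init__(self, bits=1 << 16):
        self.bits = bits
        self.bitmap = bytearray(bits // 8)

    def _index(self, item):
        digest = hashlib.blake2b(item.encode(), digest_size=8).digest()
        return int.from_bytes(digest, "big") % self.bits

    def offer(self, item):
        i = self._index(item)
        byte, mask = i // 8, 1 << (i % 8)
        if self.bitmap[byte] & mask:
            return False          # bit already set: likely duplicate, drop
        self.bitmap[byte] |= mask
        return True               # first sighting: forward off-chip

s = NonDuplicateSampler()
samples = [e for e in ["x", "y", "x", "z", "y"] if s.offer(e)]
```

Duplicates of an already sampled element are always dropped; the only approximation error comes from collisions in the fixed-size bitmap.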

Session Chair

Evgeny Khorov (IITP RAS)

Session 10-C

Security VI

Conference
4:00 PM — 5:30 PM EDT
Local
Jul 9 Thu, 1:00 PM — 2:30 PM PDT

ADA: Adaptive Deep Log Anomaly Detector

Yali Yuan (University of Goettingen, Germany); Sripriya Srikant Adhatarao (Uni Goettingen, Germany); Mingkai Lin (Nanjing University, China); Yachao Yuan (University of Goettingen, Germany); Zheli Liu (Nankai University, China); Xiaoming Fu (University of Goettingen, Germany)

0
Large private and government networks are often subjected to attacks such as data extrusion and service disruption. Existing anomaly detection systems use offline supervised learning and hence cannot detect anomalies in real time. Even though unsupervised algorithms are increasingly used, they cannot readily adapt to newer threats. Moreover, such systems also suffer from high storage costs and require extensive computational resources, in addition to employing experts for labeling. In this paper, we propose ADA: Adaptive Deep Log Anomaly Detector, an unsupervised online deep neural network framework that leverages LSTM networks. ADA regularly adapts to new log patterns to ensure accurate anomaly detection. We also design an adaptive model selection strategy to choose Pareto-optimal configurations and thereby utilize resources efficiently. Further, we propose a dynamic threshold algorithm that sets the optimal threshold based on recently detected events to improve detection accuracy. We then use the predictions to guide the storage of abnormal data and effectively reduce the overall storage cost. We compare ADA with the state of the art using the Los Alamos National Laboratory cyber security dataset and show that ADA accurately detects anomalies with a high F1-score of ~95%, is 97 times faster than existing approaches, and incurs very low storage cost.
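A minimal version of a dynamic-threshold rule, as a simplified stand-in for ADA's algorithm, might flag a score as anomalous when it exceeds the mean plus k standard deviations of a sliding window of recent scores (the window size and k are illustrative assumptions):

```python
from collections import deque
import statistics

class DynamicThreshold:
    """Sliding-window anomaly threshold: flag a score as anomalous if
    it exceeds mean + k * stdev of the most recent scores, then fold
    the score into the window so the threshold keeps adapting."""

    def __init__(self, window=100, k=3.0):
        self.scores = deque(maxlen=window)
        self.k = k

    def update(self, score):
        anomalous = False
        if len(self.scores) >= 2:
            mu = statistics.fmean(self.scores)
            sigma = statistics.pstdev(self.scores)
            anomalous = score > mu + self.k * sigma
        self.scores.append(score)
        return anomalous

det = DynamicThreshold(window=50, k=3.0)
flags = [det.update(s) for s in [1.0, 1.1, 0.9, 1.0, 9.0]]
```

Only the final outlier (9.0) trips the adaptive threshold; the earlier scores merely recalibrate it.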

DFD: Adversarial Learning-based Approach to Defend Against Website Fingerprinting

Ahmed Abusnaina (University of Central Florida, USA); RhongHo Jang (Inha University, Korea (South) & University of Central Florida, USA); Aminollah Khormali (University of Central Florida, USA); Daehun Nyang (Ewha Womans University & TheVaulters Company, Korea (South)); David Mohaisen (University of Central Florida, USA)

2
Website Fingerprinting (WF) attacks allow an adversary to recognize visited websites by exploiting and analyzing network traffic patterns. The success rate of WF attacks is highly dependent on the set of network traffic features used to build the fingerprint. Such features can be used to launch a machine/deep learning-based WF attack that breaks the existing state-of-the-art defense mechanisms. In this paper, we use an adversarial learning technique to present a novel defense mechanism, Deep Fingerprinting Defender (DFD), against deep learning-based WF attacks. DFD aims to break the inherent pattern of Tor users' online activity through the careful injection of dummy patterns at specific locations in a packet flow. We designed two configurations for dummy message injection: one-way injection and two-way injection. We conducted extensive experiments to evaluate the performance of DFD in both closed-world and open-world settings. Our results demonstrate that these two configurations can successfully break the Tor network traffic pattern and achieve a high evasion rate of 86.02% with a one-way client-side injection rate of 100%, a promising improvement over the state-of-the-art adversarial traces' evasion rate of 60%. Moreover, DFD outperforms its state-of-the-art alternatives by requiring lower bandwidth overhead: 14.26% using client-side injection.

Threats of Adversarial Attacks in DNN-Based Modulation Recognition

Yun Lin, Haojun Zhao and Ya Tu (Harbin Engineering University, China); Shiwen Mao (Auburn University, USA); Zheng Dou (Harbin Engineering University, China)

1
With the emergence of the information age, mobile data has become more random, heterogeneous, and massive. Thanks to its many advantages, deep learning is increasingly applied in communication fields such as modulation recognition. However, recent studies show that deep neural networks (DNNs) are vulnerable to adversarial examples, where subtle perturbations deliberately designed by an attacker can fool a classifier into making mistakes. From the perspective of an attacker, this study adds elaborate adversarial examples to the modulation signal and explores the threats and impacts of adversarial attacks on DNN-based modulation recognition in different environments. The results show that, in both white-box and black-box settings, the adversarial attack reduces the accuracy of the target model, and the iterative attack outperforms the one-step attack in most scenarios. To ensure the invisibility of the attack (the waveform remaining consistent before and after the perturbations), an appropriate perturbation level is found without losing the attack effect. Finally, we show that the signal confidence level is inversely proportional to the attack success rate, and we obtain several groups of signals with high robustness.
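The one-step attack mentioned above is typified by the Fast Gradient Sign Method (FGSM). A pure-Python sketch against a toy logistic classifier (not the paper's DNN setup) shows the mechanics:

```python
import math

def fgsm(x, y, w, b, eps):
    """One-step Fast Gradient Sign Method against a logistic classifier
    p = sigmoid(w.x + b): perturb each input coordinate by eps in the
    sign of the loss gradient, which pushes the model toward error."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1.0 / (1.0 + math.exp(-z))
    # Cross-entropy loss gradient w.r.t. the input: dL/dx_i = (p - y) * w_i
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

x = [1.0, -1.0]
w, b = [2.0, -2.0], 0.0
x_adv = fgsm(x, y=1, w=w, b=b, eps=0.5)
# clean logit = 4.0 (confident class 1); adversarial logit drops to 2.0
```

Iterative attacks simply repeat this step with a smaller eps, which is why they tend to outperform the one-step version.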

ZeroWall: Detecting Zero-Day Web Attacks through Encoder-Decoder Recurrent Neural Networks

Ruming Tang, Zheng Yang, Zeyan Li and Weibin Meng (Tsinghua University, China); Haixin Wang (University of Science and Technology Beijing, China); Qi Li (Tsinghua University, China); Yongqian Sun (Nankai University, China); Dan Pei (Tsinghua University, China); Tao Wei (Baidu USA LLC, USA); Yanfei Xu and Yan Liu (Baidu, Inc, China)

0
Zero-day Web attacks are arguably the most serious threats to Web security, but they are very challenging to detect because they have not been seen previously and thus cannot be caught by widely deployed signature-based Web Application Firewalls (WAFs). This paper proposes ZeroWall, an unsupervised approach that works in a pipeline with an existing WAF to effectively detect zero-day Web attacks. Using historical Web requests allowed by an existing signature-based WAF, the vast majority of which are assumed to be benign, ZeroWall trains a self-translation machine, using an encoder-decoder recurrent neural network, to capture the syntax and semantic patterns of benign requests. In real-time detection, a zero-day attack request (which the WAF fails to detect) is not understood well by the self-translation machine and cannot be translated back to its original request, and is thus declared an attack. In our evaluation using eight real-world traces of 1.4 billion Web requests, ZeroWall successfully detects real zero-day attacks missed by existing WAFs and achieves F1-scores over 0.98, significantly outperforming all baseline approaches.

Session Chair

Shucheng Yu (Stevens Institute of Technology)

Session 10-D

Network Intelligence VI

Conference
4:00 PM — 5:30 PM EDT
Local
Jul 9 Thu, 1:00 PM — 2:30 PM PDT

An Incentive Mechanism Design for Efficient Edge Learning by Deep Reinforcement Learning Approach

Yufeng Zhan (The Hong Kong Polytechnic University, China); Jiang Zhang (University of Southern California, USA)

1
Emerging technologies and applications have generated large amounts of data at the network edge. Due to bandwidth, storage, and privacy concerns, it is often impractical to move the collected data to the cloud. With the rapid development of edge computing and distributed machine learning (ML), edge-based ML, known as federated learning, has emerged to overcome the shortcomings of cloud-based ML. Existing works mainly focus on designing efficient learning algorithms; few focus on designing incentive mechanisms under heterogeneous edge nodes (ENs) and uncertain network bandwidth. Incentive mechanisms affect various tradeoffs: (i) between computation and communication latencies, and thus (ii) between edge learning time and payment consumption. We fill this gap by designing an incentive mechanism that captures the tradeoff between latency and payment. Because of network dynamics and privacy protection, we propose a deep reinforcement learning (DRL)-based solution that can automatically learn the best pricing strategy. To the best of our knowledge, this is the first work that applies advances in DRL to design incentive mechanisms for edge learning. We evaluate the performance of the incentive mechanism using trace-driven experiments. The results demonstrate the superiority of our proposed approach over the baselines.

Intelligent Video Caching at Network Edge: A Multi-Agent Deep Reinforcement Learning Approach

Fangxin Wang (Simon Fraser University, Canada); Feng Wang (University of Mississippi, USA); Jiangchuan Liu and Ryan Shea (Simon Fraser University, Canada); Lifeng Sun (Tsinghua University, China)

1
Today's explosively growing Internet video traffic and viewers' ever-increasing quality-of-experience (QoE) demands for video streaming put tremendous pressure on the backbone network. Mobile edge caching provides a promising alternative by pushing video content closer to the network edge rather than keeping it on remote CDN servers. However, our large-scale trace analysis shows that the edge caching environment is much more complicated, with massively dynamic and diverse request patterns, so existing rule-based and model-based caching solutions may not fit such complicated edge environments well. Our trace analysis also shows that the request similarity among neighboring edges can be highly dynamic and diverse, which can easily compromise the benefits of traditional cooperative caching, mostly designed for CDN environments. In this paper, we propose MacoCache, an intelligent edge caching framework that is carefully designed for the massively diversified and distributed caching environment, minimizing both content access latency and traffic cost. Specifically, MacoCache leverages a multi-agent deep reinforcement learning (MADRL) based solution, where each edge adaptively learns its own best policy in conjunction with other edges for intelligent caching. A real trace-driven evaluation further demonstrates its superiority.

Network-Aware Optimization of Distributed Learning for Fog Computing

Yuwei Tu (Zoomi Inc., USA); Yichen Ruan and Satyavrat Wagle (Carnegie Mellon University, USA); Christopher G. Brinton (Purdue University & Zoomi Inc., USA); Carlee Joe-Wong (Carnegie Mellon University, USA)

0
Fog computing holds the promise of scaling machine learning tasks to network-generated datasets by distributing processing across connected devices. Key challenges to doing so, however, are heterogeneity in devices' compute resources and topology constraints on which devices can communicate with each other. We are the first to address these challenges by developing a network-aware distributed learning optimization methodology in which devices process data for a task locally and send their learned parameters to a server for aggregation at certain time intervals. In particular, unlike traditional federated learning frameworks, our method enables devices to offload their data processing tasks, with these decisions determined through a convex data transfer optimization problem that trades off the costs associated with devices processing, offloading, or discarding data points. Using this model, we analytically characterize the optimal data transfer solution for different fog network topologies, showing, for example, that the value of a device offloading is approximately linear in the range of computing costs in the network. Our subsequent experiments on both synthetic and real-world datasets that we collected confirm that our algorithms substantially improve network resource utilization without sacrificing the accuracy of the learned model.

SurveilEdge: Real-time Video Query based on Collaborative Cloud-Edge Deep Learning

Shibo Wang and Shusen Yang (Xi'an Jiaotong University, China); Cong Zhao (Imperial College London, United Kingdom (Great Britain))

0
Large volumes of surveillance video are continuously generated by ubiquitous cameras, which creates demand for real-time queries that return video frames containing objects of certain classes with low latency and low bandwidth cost. We present SurveilEdge, a collaborative cloud-edge system for real-time queries over large-scale surveillance video streams. Specifically, we develop a convolutional neural network (CNN) training scheme based on the fine-tuning technique, and an intelligent task allocator with a task scheduling and parameter adjustment algorithm. We implement SurveilEdge on a prototype with multiple edge devices and a public cloud, and conduct extensive experiments on real-world surveillance video datasets. Experimental results demonstrate that the cloud-edge collaborative SurveilEdge reduces the average latency and bandwidth cost by up to 81.7% and 86.2% (100% and 200%), respectively, compared with traditional cloud-based (edge-based) solutions. Meanwhile, SurveilEdge balances the computing load effectively and significantly reduces the variance of per-frame query latencies.

Session Chair

Onur Altintas (Toyota Motor North America, R&D InfoTech Labs)

Session 10-E

Cloud Computing

Conference
4:00 PM — 5:30 PM EDT
Local
Jul 9 Thu, 1:00 PM — 2:30 PM PDT

Enabling Live Migration of Containerized Applications Across Clouds

Thad Benjaponpitak, Meatasit Karakate and Kunwadee Sripanidkulchai (Chulalongkorn University, Thailand)

3
Live migration, the process of transferring a running application to a different physical location with minimal downtime, can provide many benefits desired by modern cloud-based systems. Furthermore, live migration across data centers of different cloud providers gives cloud users a new level of freedom to move their workloads around for performance or business objectives without being tied to any single provider. While this vision is not new, to date there are few solutions and proofs of concept that provide this capability. As containerized applications gain popularity, we focus on the design and implementation of live migration of containers across cloud providers. A proof-of-concept live migration service between Amazon Web Services, Google Cloud Platform, and Microsoft Azure is evaluated using a common web-based workload. Our implemented service is automated and includes pre-copy optimization, connection holding, traffic redirection, and support for multiple interdependent containers, making it applicable to a broad range of application use cases.

Online Placement of Virtual Machines with Prior Data

David Naori (Technion, Israel); Danny Raz (Nokia and Technion, Israel)

3
The cloud computing market has a wide variety of customers that deploy various applications from deep learning to classical web services. Each application may have different computing, memory and networking requirements, and each customer may be willing to pay a different price for the service. When a request for a VM arrives, the cloud provider decides online whether or not to serve it and which resources to allocate for this purpose. The goal is to maximize the revenue while obeying the constraints imposed by the limited physical infrastructure and its layout.

Although requests arrive online, cloud providers are not entirely in the dark; historical data is readily available, and may contain strong indications regarding future requests. Thus, standard theoretical models that assume the online player has no prior knowledge are inadequate. In this paper we adopt a recent theoretical model for the design and analysis of online algorithms that allows taking such historical data into account. We develop new competitive online algorithms for multidimensional resource allocation and analyze their guaranteed performance. Moreover, using extensive simulation over real data from Google and AWS, we show that our new approach yields much higher revenue to cloud providers than currently used heuristics.

PAM & PAL: Policy-Aware Virtual Machine Migration and Placement in Dynamic Cloud Data Centers

Hugo Flores and Vincent Tran (CSUDH, USA); Bin Tang (California State University Dominguez Hills, USA)

2
In this paper we focus on policy-aware data centers (PADCs), wherein virtual machine (VM) traffic traverses a sequence of middleboxes (MBs) for security and performance purposes, and study two new VM migration and placement problems. We first study PAM: policy-aware virtual machine migration. Given an existing VM placement in the PADC, a data center policy that communicating VM pairs must satisfy, and dynamic traffic rates among VMs, the goal of PAM is to migrate VMs so as to minimize the total cost of migration and communication over all VM pairs. We design optimal, approximation, and heuristic policy-aware VM migration algorithms. Then we study PAL: policy-aware virtual machine placement, and show that it is a special case of PAM. Further, by exploiting unique structures of VM placement in PADCs, we design new PAL algorithms that are more time-efficient than the PAM algorithms while achieving the same optimality and approximation guarantees. Our experiments show that i) VM migration reduces the total communication cost of VM pairs by 30%, ii) our PAM algorithms outperform the only existing policy-aware VM migration scheme by 20-30%, and iii) our PAL algorithms outperform a state-of-the-art VM placement algorithm that is oblivious to data center policies by 40-50%.

SplitCast: Optimizing Multicast Flows in Reconfigurable Datacenter Networks

Long Luo (University of Electronic Science and Technology of China, China); Klaus-Tycho Foerster and Stefan Schmid (University of Vienna, Austria); Hongfang Yu (University of Electronic Science and Technology of China, China)

2
Many modern cloud applications frequently generate multicast traffic, which is becoming one of the primary communication patterns in datacenters. Emerging reconfigurable datacenter technologies enable interesting new opportunities to support such multicast traffic in the physical layer: novel circuit switches offer high-performance inter-rack multicast capabilities. However, not much is known today about the algorithmic challenges introduced by this new technology.

This paper presents SplitCast, a preemptive multicast scheduling approach that fully exploits emerging physical-layer multicast capabilities to reduce flow times. SplitCast dynamically reconfigures the circuit switches to adapt to the multicast traffic, accounting for reconfiguration delays. In particular, SplitCast relies on simple single-hop routing and leverages flexibilities by supporting splittable multicast so that a transfer can already be delivered to just a subset of receivers when the circuit capacity is insufficient. Our evaluation results show that SplitCast can reduce flow times significantly compared to state-of-the-art solutions.

Session Chair

Sangtae Ha (University of Colorado Boulder)

Session 10-F

WiFi and Wireless Sensing

Conference
4:00 PM — 5:30 PM EDT
Local
Jul 9 Thu, 1:00 PM — 2:30 PM PDT

Joint Access Point Placement and Power-Channel-Resource-Unit Assignment for 802.11ax-Based Dense WiFi with QoS Requirements

Shuwei Qiu, Xiaowen Chu, Yiu-Wing Leung and Joseph Kee-Yin Ng (Hong Kong Baptist University, Hong Kong)

1
IEEE 802.11ax is a promising standard for the next-generation WiFi network, which uses orthogonal frequency division multiple access (OFDMA) to segregate the wireless spectrum into time-frequency resource units (RUs). In this paper, we aim at designing an 802.11ax-based dense WiFi network to provide WiFi services to a large number of users within a given area with the following objectives: (1) to minimize the number of access points (APs); (2) to fulfil each user's throughput requirement; and (3) to be resistant to AP failures. We formulate the above as a joint AP placement and power-channel-RU assignment optimization problem, which is NP-hard. To tackle this problem, we first derive an analytical model to estimate each user's throughput under the OFDMA mechanism and a widely used interference model. We then design a heuristic algorithm to find high-quality solutions with polynomial time complexity. Simulation results show that our algorithm can achieve the optimal performance for a small area of 50 x 50 m2. For a larger area of 100 x 80 m2, where we cannot find the optimal solution through an exhaustive search, our algorithm can reduce the number of APs by 32-55% compared to random and greedy baselines.

Machine Learning-based Spoofing Attack Detection in MmWave 60GHz IEEE 802.11ad Networks

Ning Wang and Long Jiao (George Mason University, USA); Pu Wang (Xidian University, China); Weiwei Li (Hebei University of Engineering, China & George Mason University, USA); Kai Zeng (George Mason University, USA)

1
Spoofing attacks pose a serious threat to wireless communications. Exploiting physical-layer features to counter spoofing attacks is a promising technology. Although various physical-layer spoofing attack detection (PL-SAD) techniques have been proposed for conventional 802.11 networks in the sub-6GHz band, the study of PL-SAD for 802.11ad networks in the 5G millimeter wave (mmWave) 60GHz band is largely open. In this paper, we propose a unique physical-layer feature in IEEE 802.11ad networks, i.e., the signal-to-noise-ratio (SNR) traces in the sector level sweep (SLS) of beam pattern selections, to achieve PL-SAD. The proposed schemes are based on the observation that each 802.11ad device presents distinctive beam patterns in the beam sweeping process, which results in distinguishable SNR traces. Based on these observations, we present a novel neural network framework, named BNFN-framework, that can learn from small sample sets and be constructed quickly. The BNFN-framework consists of a backpropagation neural network and a fast forward propagation neural network. Generative adversarial networks (GANs) are introduced to optimize these neural networks. We conduct experiments using off-the-shelf 802.11ad devices, Talon AD7200s and MG360, to evaluate the performance of the proposed PL-SAD scheme. Experimental results confirm the effectiveness of the proposed PL-SAD scheme under different scenarios.

MU-ID: Multi-user Identification Through Gaits Using Millimeter Wave Radios

Xin Yang (Rutgers University, USA); Jian Liu (The University of Tennessee, Knoxville, USA); Yingying Chen (Rutgers University, USA); Xiaonan Guo and Yucheng Xie (Indiana University-Purdue University Indianapolis, USA)

1
Multi-user identification could facilitate various large-scale identity-based services such as access control, automatic surveillance, and personalized services. Although existing solutions can identify multiple users using cameras, such vision-based approaches usually raise serious privacy concerns and require the presence of line-of-sight. In contrast, in this paper, we propose MU-ID, a gait-based multi-user identification system leveraging a single commercial off-the-shelf (COTS) millimeter-wave (mmWave) radar. In particular, MU-ID takes as input frequency-modulated continuous-wave (FMCW) signals from the radar sensor. By analyzing the mmWave signals in the range-Doppler domain, MU-ID examines the users' lower limb movements and captures their distinct gait patterns, which vary in step length, duration, instantaneous lower limb velocity, and inter-lower-limb distance. Additionally, an effective spatial-temporal silhouette analysis is proposed to segment each user's walking steps. The system then identifies steps using a Convolutional Neural Network (CNN) classifier and further identifies the users in the area of interest. We implement MU-ID with the TI AWR1642BOOST mmWave sensor and conduct extensive experiments involving 10 people. The results show that MU-ID achieves up to 97% single-person identification accuracy, and over 92% identification accuracy for up to four people, while maintaining a low false positive rate.
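The range-Doppler analysis mentioned above is a standard FMCW processing step (this generic sketch is not the MU-ID pipeline): an FFT across the fast-time samples of each chirp yields range bins, and a second FFT across chirps yields Doppler (velocity) bins.

```python
import numpy as np

# Generic FMCW range-Doppler map: fast-time FFT -> range bins,
# slow-time FFT across chirps -> Doppler bins, magnitude shifted so
# that zero velocity sits at the center row.
def range_doppler(frame):
    """frame: 2-D array of shape (n_chirps, n_samples_per_chirp)."""
    range_fft = np.fft.fft(frame, axis=1)        # fast time -> range
    doppler_fft = np.fft.fft(range_fft, axis=0)  # slow time -> Doppler
    return np.fft.fftshift(np.abs(doppler_fft), axes=0)
```

Moving lower limbs trace characteristic velocity-vs-range signatures in successive maps, which is the raw material for the paper's silhouette segmentation and CNN classification.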

SmartBond: A Deep Probabilistic Machinery for Smart Channel Bonding in IEEE 802.11ac

Raja Karmakar and Samiran Chattopadhyay (Jadavpur University, India); Sandip Chakraborty (Indian Institute of Technology Kharagpur, India)

1
Dynamic bandwidth operation in IEEE 802.11ac helps wireless access points tune channel widths based on carrier sensing and the bandwidth requirements of associated wireless stations. However, wide channels reduce the carrier sensing range, which leads to the problem of channel sensing asymmetry. As a consequence, access points face hidden channel interference that may cause as much as a 60% reduction in throughput under certain dense deployments of access points. Existing approaches handle this problem by detecting hidden channels only after they occur and affect channel access performance. In this paper, we instead develop a method for avoiding hidden channels by meticulously predicting the channel width that can reduce interference as well as improve the average communication capacity. The core of our approach is a deep probabilistic machinery based on point process modeling over the evolution of the channel width selection process. The proposed approach, SmartBond, has been implemented and deployed over a testbed with 8 commercial wireless access points. The experiments show that the proposed model can significantly improve channel access performance while remaining lightweight, incurring little overhead during the decision-making process.

Session Chair

Yuanqing Zheng (The Hong Kong Polytechnic University)

Session 10-G

Edge Computing III

Conference
4:00 PM — 5:30 PM EDT
Local
Jul 9 Thu, 1:00 PM — 2:30 PM PDT

A Fast Hybrid Data Sharing Framework for Hierarchical Mobile Edge Computing

Junjie Xie and Deke Guo (National University of Defense Technology, China); Xiaofeng Shi and Haofan Cai (University of California Santa Cruz, USA); Chen Qian (University of California at Santa Cruz, USA); Honghui Chen (National University of Defense Technology, China)

0
Edge computing satisfies the stringent latency requirements of data access and processing for applications running on edge devices. The data location service is a key function that provides data storage and retrieval to enable these applications. However, scalable, low-latency data location services for mobile edge computing remain under-explored, and existing solutions such as DNS and DHT fail to meet the low-latency requirement of mobile edge computing. This paper presents a low-latency hybrid data sharing framework, HDS, in which the data location service is divided into two parts: intra-region and inter-region. In the intra-region part, we design a data sharing protocol called Cuckoo Summary to achieve fast data localization. In the inter-region part, we design a geographic-routing-based scheme to achieve efficient data localization with only one overlay hop. The advantages of HDS include short response latency, low implementation overhead, and few false positives. We implement our HDS framework based on a P4 prototype. The experimental results show that, compared to the state-of-the-art solutions, our design achieves 50.21% shorter lookup paths and 92.75% fewer false positives.

Data-driven Distributionally Robust Optimization for Edge Intelligence

Zhaofeng Zhang, Sen Lin, Mehmet Dedeoglu, Kemi Ding and Junshan Zhang (Arizona State University, USA)

2
The past few years have witnessed the explosive growth of Internet of Things (IoT) devices. The necessity of real-time edge intelligence for IoT applications means that decision making must take place right at the network edge, dictating that a high percentage of IoT-created data be stored and analyzed locally. However, computing resources are constrained and the amount of local data is often very limited at edge nodes. To tackle these challenges, we propose a distributionally robust optimization (DRO)-based edge intelligence framework, which is based on an innovative synergy of cloud knowledge transfer and local learning. More specifically, the knowledge transferred from cloud learning takes the form of a reference distribution and its associated uncertainty set. Further, based on its local data, the edge device constructs an uncertainty set centered around its empirical distribution. The edge learning problem is then cast as a DRO problem subject to these two distribution uncertainty sets. Building on this framework, we investigate two problem formulations for DRO-based edge intelligence, where the uncertainty sets are constructed using the Kullback-Leibler divergence and the Wasserstein distance, respectively. Numerical results demonstrate the effectiveness of the proposed DRO-based framework.
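In generic DRO notation (the symbols here are illustrative, not taken from the paper), an edge learning problem with the two uncertainty sets described above takes the form:

```latex
\min_{\theta}\ \sup_{P \in \mathcal{U}_{\mathrm{edge}} \cap \mathcal{U}_{\mathrm{cloud}}}
  \mathbb{E}_{\xi \sim P}\!\left[\ell(\theta;\xi)\right],
\qquad
\mathcal{U}_{\mathrm{edge}} = \bigl\{ P : D(P,\hat{P}_{\mathrm{edge}}) \le \rho_{1} \bigr\},
\quad
\mathcal{U}_{\mathrm{cloud}} = \bigl\{ P : D(P,P_{\mathrm{cloud}}) \le \rho_{2} \bigr\},
```

where \(\hat{P}_{\mathrm{edge}}\) is the empirical distribution of the local data, \(P_{\mathrm{cloud}}\) is the reference distribution transferred from the cloud, and \(D\) is instantiated as either the Kullback-Leibler divergence or the Wasserstein distance, matching the paper's two formulations.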

Delay-Optimal Distributed Edge Computing in Wireless Edge Networks

Xiaowen Gong (Auburn University, USA)

2
Distributed edge computing (DEC) makes use of distributed devices in the edge network to perform computing in parallel. By integrating distributed computing with edge computing, DEC can reduce service delays compared to each of the two paradigms. In this paper, we explore DEC that exploits edge devices connected by a wireless network to perform distributed computing. In particular, we study the fundamental problem of minimizing the delay of executing a distributed algorithm. We first establish some structural properties of the optimal communication scheduling, which show that it is optimal to be non-preemptive, be non-idle, and schedule forward communications before backward communications. Then, based on these properties, we characterize the optimal computation allocation which can be found by an efficient algorithm. Next, based on the optimal computation allocation, we characterize the optimal scheduling order of communications for some cases, and develop an efficient algorithm with a finite approximation ratio to find it for the general case. Last, based on the optimal computation allocation and communication scheduling, we further show that the optimal selection of devices can be found efficiently for some cases. Our results provide useful insights into the optimal policies. We evaluate the performance of the theoretical findings using simulations.

Fog Integration with Optical Access Networks from an Energy Efficiency Perspective

Ahmed Helmy and Amiya Nayak (University of Ottawa, Canada)

0
Access networks have been undergoing continuous transformation to better support demanding applications and adapt to emerging traffic trends. On one hand, incorporating fog and edge computing has become a necessity for alleviating network congestion and supporting numerous applications that can no longer rely on the resources of a remote cloud. On the other hand, energy efficiency has become a strong imperative for telecommunication networks, aiming to reduce both their operational costs and carbon footprint, though it often degrades network performance. In this paper, we address these two challenges by examining the integration of fog computing with optical access networks under power-conserving frameworks. As most power-conserving frameworks in the literature are centralized, we also propose a novel decentralized energy-efficient framework and compare its performance against its centralized counterpart in a long-reach passive optical network (LR-PON). We study the possible cloudlet placements and the offloading performance in each allocation paradigm to determine which can meet the requirements of next-generation access networks by delivering better network performance with less energy consumption.

Session Chair

Marie-Jose Montpetit (MJMontpetit.com)
