Session C-7

Wireless Charging

Conference
8:30 AM — 10:00 AM EDT
Local
May 19 Fri, 8:30 AM — 10:00 AM EDT
Location
Babbio 202

Utilizing the Neglected Back Lobe for Mobile Charging

Meixuan Ren, Dié Wu and Jing Xue (Sichuan Normal University, China); Wenzheng Xu and Jian Peng (Sichuan University, China); Tang Liu (Sichuan Normal University, China)

Benefitting from the breakthrough of wireless power transfer technology, the lifetime of Wireless Sensor Networks (WSNs) can be significantly prolonged by scheduling a mobile charger (MC) to charge sensors. Compared with omnidirectional charging, an MC equipped with a directional antenna can concentrate energy in the intended direction, making charging more efficient. However, all prior art ignores the considerable energy leakage behind the directional antenna (i.e., the back lobe), wasting that energy. To address this issue, we study the fundamental problem of how to utilize the neglected back lobe and schedule the directional MC efficiently. To this end, we first build and verify a directional charging model that considers both the main and back lobes. Then, we focus on jointly optimizing the number of dead sensors and energy usage effectiveness. We achieve this by introducing a scheduling scheme that utilizes both main and back lobes to charge multiple sensors simultaneously. Finally, extensive simulations and field experiments demonstrate that our scheme reduces the number of dead sensors by 49.5% and increases energy usage effectiveness by 10.2% on average compared with existing algorithms.
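As an illustration of a charging model that accounts for both lobes, the sketch below uses a hypothetical piecewise gain pattern with inverse-power path loss; the gains, beamwidth, and path-loss exponent are made-up parameters, not the paper's measured model.

```python
import math

def received_power(p_tx, d, theta, beamwidth=math.pi / 3,
                   g_main=10.0, g_back=1.5, alpha=2.0):
    """Received power at distance d and angle theta (rad) from the
    antenna boresight. Sensors inside the main beam see gain g_main;
    sensors behind the antenna see the back-lobe gain g_back rather
    than zero -- the leakage that can charge a second sensor for free.
    All parameter values here are illustrative assumptions."""
    angle = abs(theta) % (2 * math.pi)
    if angle > math.pi:
        angle = 2 * math.pi - angle
    if angle <= beamwidth / 2:               # main lobe
        gain = g_main
    elif angle >= math.pi - beamwidth / 2:   # back lobe
        gain = g_back
    else:                                    # side region: negligible
        gain = 0.0
    return p_tx * gain / (d ** alpha)

# Placing the charger between two sensors serves both simultaneously:
front = received_power(3.0, 1.0, 0.0)      # sensor ahead of the antenna
back = received_power(3.0, 1.0, math.pi)   # sensor behind the antenna
```

Under this toy model the sensor behind the charger still receives a nonzero trickle (here 15% of the main-lobe gain), which is exactly the energy that back-lobe-oblivious schedulers discard.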
Speaker
Speaker biography is not available.

Concurrent Charging with Wave Interference

Yuzhuo Ma, Dié Wu and Meixuan Ren (Sichuan Normal University, China); Jian Peng (Sichuan University, China); Jilin Yang and Tang Liu (Sichuan Normal University, China)

To improve charging performance, an effective approach is to employ multiple wireless chargers to charge sensors concurrently. In such charging scenarios, the radio waves radiated from multiple chargers interfere with each other. Although a few works have recognized wave interference, they do not fully utilize the high power produced by constructive interference while avoiding the negative impact of destructive interference. In this paper, we investigate the power distribution regularity of concurrent charging and take full advantage of the high power to enhance charging efficiency. Specifically, we formulate a concurrent charGing utility mAxImizatioN (GAIN) problem and build a practical charging model with wave interference. Further, we propose a concurrent charging scheme, which not only improves the power of interference-enhanced regions by deploying chargers but also finds a set of points with the highest power at which to locate sensors. Finally, we conduct both simulations and field experiments to evaluate the proposed scheme. The results demonstrate that our scheme outperforms the comparison algorithms by 40.48% on average.
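The constructive/destructive power distribution the abstract describes follows from phasor superposition. This minimal sketch computes the power at a point given each charger's field amplitude and phase there (propagation details are omitted):

```python
import cmath
import math

def combined_power(amplitudes, phases):
    """Power at a point where waves from several chargers superpose:
    sum the complex field phasors, then take the squared magnitude.
    In-phase arrivals reinforce; opposite-phase arrivals cancel."""
    field = sum(a * cmath.exp(1j * p) for a, p in zip(amplitudes, phases))
    return abs(field) ** 2

# Two equal waves: constructive interference yields 4x a single wave's
# power (not 2x), which is the surplus a GAIN-style scheme exploits,
# while destructive interference yields (nearly) zero.
constructive = combined_power([1.0, 1.0], [0.0, 0.0])
destructive = combined_power([1.0, 1.0], [0.0, math.pi])
```

The quadratic gain of in-phase superposition is why placing sensors at interference-enhanced points matters: the peak power exceeds the simple sum of the chargers' individual contributions.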
Speaker
Speaker biography is not available.

Roland: Robust In-band Parallel Communication for Magnetic MIMO Wireless Power Transfer System

Wangqiu Zhou, Hao Zhou, Xiang Cui and Xinyu Wang (University of Science and Technology of China, China); Xiaoyan Wang (Ibaraki University, Japan); Zhi Liu (The University of Electro-Communications, Japan)

In recent years, receiver (RX) feedback communication has attracted increasing attention as a way to enhance the charging performance of magnetic resonant coupling (MRC) based wireless power transfer (WPT) systems. In-band implementations are preferred for their minimal overhead. However, the influence of RX-RX coupling cannot simply be ignored as it is in the RFID field, owing to two critical issues: strong couplings and the relay phenomenon. To solve these two issues, we propose Roland, a robust layer-level in-band parallel communication protocol for MIMO MRC-WPT systems. Technically, we first utilize the observed channel decomposability to construct a group-level channel relationship graph that eliminates the interference caused by strong RX-RX couplings. Then, we generalize this method to deal with the RX dependency due to the relay phenomenon. Finally, we conduct extensive experiments on a prototype testbed to evaluate the effectiveness of the proposed scheme. The results demonstrate that Roland provides ≥95% average decoding accuracy for concurrent feedback communication among 14 devices. Compared with the state-of-the-art solution, Roland achieves an average decoding accuracy improvement of 20.41%.
Speaker
Speaker biography is not available.

Charging Dynamic Sensors through Online Learning

Yu Sun, Chi Lin, Wei Yang, Jiankang Ren, Lei Wang, Guowei Wu and Qiang Zhang (Dalian University of Technology, China)

As a novel solution for IoT applications, wireless rechargeable sensor networks (WRSNs) have achieved widespread deployment in recent years. Existing WRSN scheduling methods have focused extensively on maximizing network charging utility in the fixed-node case. However, when sensor nodes are deployed in dynamic environments (e.g., maritime environments) where sensors move randomly over time, existing approaches are likely to incur significant performance loss or even fail to execute normally. In this work, we focus on serving dynamic nodes whose locations vary randomly and formalize the dynamic WRSN charging utility maximization problem (termed the MATA problem). By discretizing candidate charging locations and modeling the dynamic charging process, we propose a near-optimal algorithm for maximizing charging utility. Moreover, we point out the long-short-term conflict of dynamic sensors: their short-term location distributions usually deviate from long-term expectations. To tackle this issue, we further design an online learning algorithm based on the combinatorial multi-armed bandit (CMAB) model. It iteratively adjusts the charging strategy and adapts well to nodes' short-term location deviations. Extensive experiments and simulations demonstrate that the proposed scheme effectively charges dynamic sensors and achieves higher charging utility than baseline algorithms over both the long term and the short term.
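The CMAB-based adaptation can be sketched as a combinatorial UCB loop over discretized charging locations; the exploration constant and incremental-mean update below are standard bandit machinery and illustrative assumptions, not the paper's exact algorithm.

```python
import math

def cucb_select(counts, means, t, k):
    """Pick the k candidate charging locations with the highest upper
    confidence bounds; locations never tried are explored first."""
    ucb = []
    for m, n in zip(means, counts):
        if n == 0:
            ucb.append(float("inf"))
        else:
            ucb.append(m + math.sqrt(1.5 * math.log(t) / n))
    return sorted(range(len(ucb)), key=lambda i: ucb[i], reverse=True)[:k]

def update(counts, means, arm, reward):
    """Fold the charging utility observed at one location into its
    running mean (incremental average)."""
    counts[arm] += 1
    means[arm] += (reward - means[arm]) / counts[arm]
```

Each round, the MC visits the k selected locations, observes the charging utility (which drifts with sensors' short-term movements), and updates the per-location estimates, so the strategy tracks short-term deviations from the long-term distribution.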
Speaker
Speaker biography is not available.

Session C-8

Network Applications

Conference
10:30 AM — 12:00 PM EDT
Local
May 19 Fri, 10:30 AM — 12:00 PM EDT
Location
Babbio 202

Latency-First Smart Contract: Overclock the Blockchain for a while

Huayi Qi, Minghui Xu and Xiuzhen Cheng (Shandong University, China); Weifeng Lv (Beijing University of Aeronautics and Astronautics, China)

The blockchain system has limited throughput for processing transactions and is sometimes overwhelmed by a large number of them. Latency-sensitive users have to bid against each other and pay higher fees to ensure their transactions are processed with priority. However, the blockchain system is not busy all the time. Most of the time (76% in Ethereum), a great deal of computing power is wasted while fewer users are sending transactions. To rebalance the load and reduce latency for users, we propose the latency-first smart contract model, which allows users to submit a commitment during heavy-load periods and then finish the remaining work during spare time. From the chain's view, the blockchain is "overclocked" for a while and then pays back. We propose a programming tool for our model, and our experimental results show that applying our model greatly reduces latency under heavy load.
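The "overclock now, pay back later" idea can be pictured with a toy block-capacity model (all numbers made up): work beyond a busy block's capacity leaves only a lightweight commitment on-chain, and the deferred portion is settled when later blocks have slack.

```python
def schedule(loads, capacity):
    """Per-block execution with deferral. `loads` is the work arriving
    in each block; work above `capacity` is carried forward as a
    commitment and settled in later, idle blocks."""
    deferred = 0
    executed = []
    for load in loads:
        total = load + deferred          # new work plus the backlog
        run = min(total, capacity)       # a block can only do so much
        deferred = total - run           # the chain "pays back" later
        executed.append(run)
    return executed, deferred

# A burst of 22 units of work is smoothed over five blocks of capacity 5:
done, backlog = schedule([10, 9, 2, 1, 0], capacity=5)
```

Without deferral, the first two blocks would simply drop (or price out) the excess; with it, users commit cheaply at peak time and the idle tail absorbs the backlog.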
Speaker Huayi Qi (Shandong University)

Huayi Qi received his bachelor's degree in computer science from Shandong University in 2020. He is working toward a Ph.D. degree in the School of Computer Science and Technology, Shandong University, China. His research interests include blockchain privacy and security.


On Design and Performance of Offline Finding Network

Tong Li (Renmin University of China, China); Jiaxin Liang (Huawei Technologies, China); Yukuan Ding (Hong Kong University of Science and Technology, Hong Kong); Kai Zheng (Huawei Technologies, China); Xu Zhang (Nanjing University, China); Ke Xu (Tsinghua University, China)

Recently, such industrial pioneers as Apple and Samsung have offered a new generation of offline finding network (OFN) that enables crowd search for missing devices without leaking private data. Specifically, OFN leverages nearby online finder devices to conduct neighbor discovery via Bluetooth Low Energy (BLE), so as to detect the presence of offline missing devices and report an encrypted location back to the owner via the Internet. The user experience in OFN is closely related to the success ratio (probability) of finding the lost device, where the latency of the prerequisite stage, i.e., neighbor discovery, matters. However, crowd-sourced finder devices show diversity in scan modes due to different power modes or manufacturers, resulting in local optima of neighbor discovery performance. In this paper, we present a brand-new broadcast mode called ElastiCast to deal with scan-mode diversity. ElastiCast captures the key features of BLE neighbor discovery and globally optimizes the broadcast mode interacting with diverse scan modes. Experimental evaluation results and commercial product deployment experience demonstrate that ElastiCast achieves stable and bounded neighbor discovery latency within the power budget.
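Neighbor-discovery latency comes from how an advertiser's broadcast interval interleaves with a scanner's (interval, window) schedule. The brute-force sketch below, with made-up millisecond parameters, shows why some broadcast/scan pairings never align, i.e., the local optima the abstract mentions.

```python
def discovery_latency(adv_interval, scan_interval, scan_window,
                      offset=0, horizon=100_000):
    """Time (ms) until an advertising event falls inside a scan
    window, for one phase alignment; None means the two periodic
    schedules never align within the horizon. All timings are
    illustrative, not real BLE parameter values."""
    t = offset
    while t < horizon:
        if t % scan_interval < scan_window:
            return t
        t += adv_interval
    return None

# A scanner with a wide window discovers quickly...
fast = discovery_latency(100, 1000, 150, offset=200)
# ...but a low-power scanner with a narrow window can be pathologically
# mismatched with the same broadcast interval and never discover.
stuck = discovery_latency(100, 1000, 50, offset=250)
```

A broadcast mode tuned for one scan mode can therefore be arbitrarily bad for another, which is why ElastiCast optimizes globally across diverse scan modes rather than per pairing.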
Speaker
Speaker biography is not available.

WiseCam: Wisely Tuning Wireless Pan-Tilt Cameras for Cost-Effective Moving Object Tracking

Jinlong E (Renmin University of China, China); Lin He and Zhenhua Li (Tsinghua University, China); Yunhao Liu (Tsinghua University & The Hong Kong University of Science and Technology, China)

With the desirable functionality of moving object tracking, wireless pan-tilt cameras play critical roles in a growing diversity of surveillance environments. However, today's pan-tilt cameras often underperform when tracking frequently moving objects such as humans: they are prone to losing sight of objects and incur excessive mechanical rotations that are especially detrimental in energy-constrained outdoor scenarios. The ineffectiveness and high cost of state-of-the-art tracking approaches are rooted in their adherence to the industry's simplicity principle, which leads to their stateless nature: gimbal rotations are performed based only on the latest object detection. To address these issues, this paper presents WiseCam, which wisely tunes pan-tilt cameras to minimize mechanical rotation costs while maintaining long-term object tracking with low overhead. This is achieved by object trajectory construction in a panoramic space and online rotation angle determination based on spatio-temporal motion information, together with adaptively adjusted rotation generation and execution. We implement WiseCam on two types of pan-tilt cameras with different motors. Real-world evaluations demonstrate that WiseCam significantly outperforms state-of-the-art tracking approaches in both tracking duration and power consumption.
Speaker
Speaker biography is not available.

Effectively Learning Moiré QR Code Decryption from Simulated Data

Yu Lu, Hao Pan, Guangtao Xue and Yi-Chao Chen (Shanghai Jiao Tong University, China); Jinghai He (University of California, Berkeley, China); Jiadi Yu (Shanghai Jiao Tong University, China); Feitong Tan (Simon Fraser University, Canada)

Moiré QR Code is a secure encrypted QR code system that protects a user's QR code displayed on a screen from being captured by attackers. However, conventional decryption methods based on image processing techniques suffer from intensive computation and significant decryption latency in practical mobile applications. In this work, we propose a deep learning-based Moiré QR code decryption framework that achieves excellent decryption performance. Given the sensitivity of the Moiré phenomenon, collecting training data in the real world is extremely labor- and material-intensive. To overcome this issue, we develop a physical screen-imaging Moiré simulation methodology to generate a synthetic dataset covering the entire Moiré-visible area. Extensive experiments show that the proposed decryption network achieves low decryption latency (0.02 seconds) and a high decryption rate (98.8%), compared with the previous decryption method's 5.4-second latency and 98.6% decryption rate.
Speaker
Speaker biography is not available.

Session Chair

Qinghua Li

Session C-9

Crowdsourcing

Conference
1:30 PM — 3:00 PM EDT
Local
May 19 Fri, 1:30 PM — 3:00 PM EDT
Location
Babbio 202

Multi-Objective Order Dispatch for Urban Crowd Sensing with For-Hire Vehicles

Jiahui Sun, Haiming Jin, Rong Ding and Guiyun Fan (Shanghai Jiao Tong University, China); Yifei Wei (Carnegie Mellon University, USA); Lu Su (Purdue University, USA)

For-hire vehicle-enabled crowd sensing (FVCS) has become a promising paradigm for conducting urban sensing tasks in recent years. FVCS platforms aim to jointly optimize both order-serving revenue and sensing coverage and quality. However, these two objectives often conflict and need to be balanced according to the platform's preferences. To address this problem, we propose a novel cooperative multi-objective multi-agent reinforcement learning framework, referred to as MOVDN, which serves as the first preference-configurable order dispatch mechanism for FVCS platforms. Specifically, MOVDN adopts a decomposed network structure, which enables agents to make distributed order selection decisions while aligning each agent's local decision with the global objectives of the FVCS platform. Then, we propose a novel algorithm to train a single universal MOVDN that is optimized over the space of all preferences. This allows our trained model to produce the optimal policy for any preference. Furthermore, we provide a theoretical convergence guarantee and sample efficiency analysis of our algorithm. Extensive experiments on three real-world ride-hailing order datasets demonstrate that MOVDN outperforms strong baselines and effectively supports the platform's decision-making.
Speaker Haiming Jin (Shanghai Jiao Tong University)



AoI-aware Incentive Mechanism for Mobile Crowdsensing using Stackelberg Game

Mingjun Xiao, Yin Xu and Jinrui Zhou (University of Science and Technology of China, China); Jie Wu (Temple University, USA); Sheng Zhang (Nanjing University, China); Jun Zheng (University of Science and Technology of China, China)

Mobile CrowdSensing (MCS) is a mobile computing paradigm through which a platform can coordinate a crowd of workers to accomplish large-scale data collection tasks using their mobile devices. Information freshness has attracted much attention in MCS research. In this paper, we investigate incentive mechanism design in MCS that takes the freshness of collected data and social benefits into consideration. First, we introduce the Age of Information (AoI) metric to measure data freshness. Then, we model the incentive mechanism design with AoI guarantees as a novel incomplete-information two-stage Stackelberg game with multiple constraints. Next, we derive the optimal strategies of this game to determine the optimal reward paid by the platform and the optimal data update frequency for each worker. Moreover, we prove that these optimal strategies form a unique Stackelberg equilibrium. Based on the optimal strategies, we propose an AoI-Aware Incentive (AIAI) mechanism, whereby the platform and workers can maximize their utilities simultaneously. Meanwhile, the system ensures that the AoI values of all data uploaded to the platform do not exceed a given threshold, achieving high data freshness. Extensive simulations on real-world traces demonstrate the strong performance of AIAI.
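The two-stage structure can be sketched with a toy utility model (all functional forms and numbers below are assumptions, not the paper's): each worker best-responds to the platform's posted reward subject to its AoI constraint, and the platform searches its own strategy space anticipating those responses.

```python
def worker_best_response(reward, cost, aoi_max):
    """Follower stage: a worker chooses the data update frequency f
    maximizing reward*f - cost*f**2, but must update often enough
    that its data's AoI never exceeds aoi_max (i.e., f >= 1/aoi_max).
    Quadratic effort cost is a hypothetical choice."""
    f = reward / (2 * cost)              # unconstrained maximizer
    return max(f, 1.0 / aoi_max)

def platform_utility(reward, costs, aoi_max, value=10.0):
    """Leader stage: the platform gains concave value from the
    workers' update frequencies but pays `reward` per unit of them."""
    freqs = [worker_best_response(reward, c, aoi_max) for c in costs]
    return value * sum(f ** 0.5 for f in freqs) - reward * sum(freqs)

# The leader optimizes over a reward grid, anticipating the followers'
# optimal responses; the resulting strategy pair is the equilibrium.
best_r = max((r / 10 for r in range(1, 101)),
             key=lambda r: platform_utility(r, [1.0, 2.0], aoi_max=5.0))
```

The AoI constraint binds for low rewards (workers update at exactly 1/aoi_max), which is how the mechanism keeps every uploaded datum's age below the threshold even when incentives are weak.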
Speaker
Speaker biography is not available.

Spatiotemporal Transformer for Data Inference and Long Prediction in Sparse Mobile CrowdSensing

En Wang, Weiting Liu and Wenbin Liu (Jilin University, China); Chaocan Xiang (Chongqing University, China); Bo Yang and Yongjian Yang (Jilin University, China)

Mobile CrowdSensing (MCS) is a data sensing paradigm that recruits users carrying mobile terminals to collect data. Its variant, Sparse MCS, has become a practical paradigm for large-scale and fine-grained urban sensing, with the advantage of collecting only a small amount of data to infer the full sensing map. However, in many real-world scenarios, such as early epidemic prevention, users are interested not only in inferring the entire sensing map but also in long-term prediction of the sensing map, with the latter being more important. Long-term prediction not only reduces sensing cost but also identifies trends and other characteristics of the data. In this paper, we propose a novel Spatiotemporal Transformer (ST-transformer) model to infer and predict the data from sparsely sensed data based on spatiotemporal relationships. We design a spatiotemporal feature embedding that injects the prior spatiotemporal information of the sensing map into the model to guide learning. Moreover, we design a multi-head spatiotemporal attention mechanism to dynamically capture spatiotemporal relationships among data. Extensive experiments on three typical types of urban sensing tasks verify the effectiveness of our proposed algorithms in improving inference and long-term prediction accuracy with sparsely sensed data.
Speaker
Speaker biography is not available.

Crowd2: Multi-agent Bandit-based Dispatch for Video Analytics upon Crowdsourcing

Yu Chen, Sheng Zhang, Yuting Yan, Yibo Jin, Ning Chen and Mingtao Ji (Nanjing University, China); Mingjun Xiao (University of Science and Technology of China, China)

Many crowdsourcing platforms are emerging that leverage the resources of recruited workers to execute various outsourced tasks, mainly computing-intensive video analytics with high quality requirements. Although the profit of each platform is strongly related to the quality of analytics feedback, the uncertainty in workers' diverse performance and the conflicts of interest among platforms make it non-trivial to determine the dispatch of tasks with maximum benefit. In this paper, we design a decentralized mechanism for a Crowd of Crowdsourcing platforms, denoted Crowd2, optimizing worker selection to maximize the long-term social welfare of these platforms under the consideration of both proportional fairness and dynamic flexibility. Concretely, we propose a video analytics dispatch algorithm based on multi-agent bandits, in which more accurate profit estimates are attained via the decoupling of a multi-knapsack-based mapping problem. Via rigorous proofs, a sub-linear regret bound on the social welfare of crowdsourcing profits is achieved while both fairness and flexibility are ensured. Extensive trace-driven experiments demonstrate that Crowd2 improves social welfare by 36.8% compared with other alternatives.
Speaker
Speaker biography is not available.

Session Chair

Qinghua Li

Session C-10

Distributed Learning

Conference
3:30 PM — 5:00 PM EDT
Local
May 19 Fri, 3:30 PM — 5:00 PM EDT
Location
Babbio 202

Accelerating Distributed K-FAC with Efficient Collective Communication and Scheduling

Lin Zhang (Hong Kong University of Science and Technology, Hong Kong); Shaohuai Shi (Harbin Institute of Technology, Shenzhen, China); Bo Li (Hong Kong University of Science and Technology, Hong Kong)

Kronecker-factored approximate curvature (K-FAC) has been shown to achieve faster convergence than SGD in training deep neural networks. However, existing distributed K-FAC (D-KFAC) relies on the all-reduce collective for communications and scheduling, which incurs excessive communications in each iteration. In this work, we propose a new form of D-KFAC with a reduce-based alternative to eliminate redundant communications. This poses new challenges and opportunities in that the reduce collective requires a root worker to collect the results, which considerably complicates the communication scheduling. To this end, we formulate an optimization problem that determines tensor fusion and tensor placement simultaneously aiming to minimize the training iteration time. We develop novel communication scheduling strategies and propose a placement-aware D-KFAC (PAD-KFAC) training algorithm, which further improves communication efficiency. Our experimental results on a 64-GPU cluster interconnected with 10Gb/s and 100Gb/s Ethernet show that our PAD-KFAC can achieve an average of 27% and 17% improvement over state-of-the-art D-KFAC methods, respectively.
Speaker
Speaker biography is not available.

PipeMoE: Accelerating Mixture-of-Experts through Adaptive Pipelining

Shaohuai Shi (Harbin Institute of Technology, Shenzhen, China); Xinglin Pan and Xiaowen Chu (Hong Kong Baptist University, Hong Kong); Bo Li (Hong Kong University of Science and Technology, Hong Kong)

Large models have attracted much attention in the AI area. The sparsely activated mixture-of-experts (MoE) technique pushes model size to the trillion level with a sub-linear increase in computation, as an MoE layer can be equipped with many separate experts while only one or two experts need to be activated for each input sample. However, MoE's dynamic expert activation introduces extensive communication in distributed training. In this work, we propose PipeMoE to adaptively pipeline the communications and computations in MoE so as to maximally hide the communication time. Specifically, we first identify the root reason why a higher pipeline degree does not always achieve better performance in training MoE models. Then we formulate an optimization problem that aims to minimize the training iteration time. To solve this problem, we build performance models for the computation and communication tasks in MoE and develop an optimal solution that determines the pipeline degree minimizing iteration time. We conduct extensive experiments with 174 typical MoE layers and two real-world NLP models on a 64-GPU cluster. Experimental results show that PipeMoE almost always chooses the best pipeline degree and outperforms state-of-the-art MoE training systems by 5%-77% in training time.
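The core trade-off (a higher pipeline degree shrinks the exposed communication but pays a per-chunk startup cost) can be captured by a small performance model. The linear cost model and numbers here are illustrative assumptions, not PipeMoE's fitted model.

```python
def iteration_time(degree, comp_time, comm_time, startup=0.1):
    """Model one MoE layer whose input is split into `degree` chunks so
    that one chunk's computation overlaps the next chunk's
    communication. Each chunk pays a fixed `startup` overhead, so
    ever-higher degrees eventually hurt -- the non-monotonicity a
    pipeline-degree model must capture."""
    chunk_comp = comp_time / degree + startup
    chunk_comm = comm_time / degree + startup
    # First communication is exposed, middle stages run in lockstep,
    # and the last computation drains the pipeline.
    return chunk_comm + (degree - 1) * max(chunk_comp, chunk_comm) + chunk_comp

def best_degree(comp_time, comm_time, candidates=(1, 2, 4, 8, 16)):
    """Choose the pipeline degree minimizing modeled iteration time."""
    return min(candidates, key=lambda d: iteration_time(d, comp_time, comm_time))
```

With equal compute and communication times of 4 units and a 0.1 startup cost, this model picks degree 8, while degree 16 is already slower: exactly why the best degree must be computed rather than maximized.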
Speaker
Speaker biography is not available.

DIAMOND: Taming Sample and Communication Complexities in Decentralized Bilevel Optimization

Peiwen Qiu, Yining Li and Zhuqing Liu (The Ohio State University, USA); Prashant Khanduri (University of Minnesota, USA); Jia Liu and Ness B. Shroff (The Ohio State University, USA); Elizabeth Serena Bentley (AFRL, USA); Kurt Turck (United States Air Force Research Labs, USA)

Decentralized bilevel optimization has received increasing attention recently due to its foundational role in many emerging multi-agent learning paradigms (e.g., multi-agent meta-learning and multi-agent reinforcement learning) over peer-to-peer edge networks. However, to work within the limited computation and communication capabilities of edge networks, a major challenge in developing decentralized bilevel optimization techniques is to lower sample and communication complexities. This motivates us to develop a new decentralized bilevel optimization framework called DIAMOND (decentralized single-timescale stochastic approximation with momentum and gradient-tracking). The contributions of this paper are as follows: i) our DIAMOND algorithm adopts a single-loop structure rather than following the natural double-loop structure of bilevel optimization, which offers low computation and implementation complexity; ii) compared to existing approaches, the DIAMOND algorithm does not require any full gradient evaluations, which further reduces both sample and computational complexities; iii) through a careful integration of momentum information and gradient tracking techniques, we show that the DIAMOND algorithm enjoys O(ε^(-3/2)) sample and communication complexities for achieving an ε-stationary solution, both of which are independent of the dataset sizes and significantly outperform existing works. Extensive experiments also verify our theoretical findings.
Speaker
Speaker biography is not available.

DAGC: Data-aware Adaptive Gradient Compression

Rongwei Lu (Tsinghua University, China); Jiajun Song (Dalian University of Technology, China); Bin Chen (Harbin Institute of Technology, Shenzhen, China); Laizhong Cui (Shenzhen University, China); Zhi Wang (Tsinghua University, China)

Gradient compression algorithms are widely used to alleviate the communication bottleneck in distributed ML. However, existing gradient compression algorithms suffer from accuracy degradation in non-IID scenarios because a uniform compression scheme is used to compress gradients at workers with different data distributions and volumes: workers with larger volumes of data are forced to adopt the same aggressive compression ratios as others. Assigning different compression ratios to workers with different data distributions and volumes is thus a promising solution.
In this study, we first derive a function capturing the correlation between the number of training iterations for a model to converge to a given accuracy and the compression ratios at different workers; this function shows that workers with larger data volumes should be assigned higher compression ratios to guarantee better accuracy. Then, we formulate the assignment of compression ratios to workers as an n-variable chi-square nonlinear optimization problem under a fixed and limited total communication constraint. We propose an adaptive gradient compression strategy called DAGC, which assigns each worker a different compression ratio according to its data volume. Our experiments confirm that DAGC achieves better performance in the face of highly imbalanced data volume distributions and restricted communication.
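The monotonicity implied by the derived function, namely that workers with more data should keep more of their gradient under a shared communication budget, can be illustrated with a simple proportional split; DAGC itself solves a nonlinear optimization, so this allocation rule is only a stand-in.

```python
def assign_ratios(data_volumes, total_budget):
    """Split a fixed total communication budget across workers in
    proportion to their data volumes: a worker holding more data keeps
    a larger fraction of its gradient coordinates. (Illustrative rule;
    DAGC derives the split from a convergence model instead.)"""
    total = sum(data_volumes)
    return [total_budget * v / total for v in data_volumes]

# Three workers with imbalanced (non-IID) shards of 100/300/600 samples
# sharing a total kept-fraction budget of 0.3:
ratios = assign_ratios([100, 300, 600], total_budget=0.3)
```

A uniform scheme would give each worker 0.1; the proportional split instead lets the data-rich worker keep 0.18 while staying within the same total budget, which is the intuition behind data-aware adaptive compression.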
Speaker
Speaker biography is not available.

Session Chair

Yanjiao Chen
