Session A-3

Privacy

May 4 Wed, 10:00 AM — 11:30 AM EDT

Otus: A Gaze Model-based Privacy Control Framework for Eye Tracking Applications

Miao Hu and Zhenxiao Luo (Sun Yat-Sen University, China); Yipeng Zhou (Macquarie University, Australia); Xuezheng Liu and Di Wu (Sun Yat-Sen University, China)

Eye tracking techniques have been widely adopted by a wide range of devices to enhance user experiences. However, eye gaze data is private in nature and can reveal users' psychological and physiological traits. Yet, most existing privacy-preserving solutions based on Differential Privacy (DP) mechanisms cannot adequately protect individual users' privacy without sacrificing user experience. In this paper, we are among the first to propose a novel gaze model-based privacy control framework, called Otus, for eye tracking applications, which incorporates local DP (LDP) mechanisms to preserve user privacy while maintaining user experience. First, we conduct a measurement study on real traces to show that injecting noise directly into raw gaze trajectories significantly lowers the utility of gaze data. To preserve utility and privacy simultaneously, Otus injects noise in two steps: (1) extracting model features from raw data to characterize individual users' gaze trajectories; (2) adding LDP noise to the model features to protect privacy. By applying the tile view graph model in step (1), we illustrate the entire workflow of Otus and prove its privacy protection level. Extensive evaluation shows that Otus effectively protects individual users' privacy without significantly compromising gaze data utility.
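To make the two-step idea concrete, here is a minimal sketch in the spirit of Otus, not the authors' implementation: a gaze trajectory is first reduced to tile-visit features (a simple stand-in for the tile view graph model), and Laplace noise calibrated by an assumed epsilon and sensitivity is then added to those features rather than to the raw gaze points.

```python
import numpy as np

def tile_view_features(trajectory, tile=0.1, grid=10):
    """Step 1: reduce (x, y) gaze points in [0, 1)^2 to tile visit frequencies."""
    counts = np.zeros((grid, grid))
    for x, y in trajectory:
        i, j = min(int(x / tile), grid - 1), min(int(y / tile), grid - 1)
        counts[i, j] += 1
    return counts / max(len(trajectory), 1)

def ldp_perturb(features, epsilon=1.0, sensitivity=1.0):
    """Step 2: Laplace mechanism applied to model features, not raw gaze."""
    noise = np.random.laplace(0.0, sensitivity / epsilon, size=features.shape)
    return np.clip(features + noise, 0.0, None)

rng = np.random.default_rng(0)
trajectory = rng.random((500, 2))                # synthetic gaze trace
private_features = ldp_perturb(tile_view_features(trajectory))
```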

Privacy-Preserving Online Task Assignment in Spatial Crowdsourcing: A Graph-based Approach

Hengzhi Wang, En Wang and Yongjian Yang (Jilin University, China); Jie Wu (Temple University, USA); Falko Dressler (TU Berlin, Germany)

Recently, the growing popularity of Spatial Crowdsourcing (SC), which allows untrusted platforms to obtain a great quantity of information about workers' and tasks' locations, has raised numerous privacy concerns. In this paper, we investigate privacy-preserving task assignment in the online scenario, where workers and tasks arrive at the platform in real time and tasks must be assigned to workers immediately. Traditional online task assignment methods usually establish a benchmark to guide subsequent assignments. However, once location privacy is considered, such benchmarks no longer work. Hence, assigning tasks in real time based on the obfuscated locations of workers and tasks is a challenging problem, especially when many tasks can be assigned to one worker, in which case path planning must also be considered, making the assignment even more challenging. To this end, we propose a Planar Laplace distribution based Privacy mechanism (PLP) to obfuscate the real locations of workers and tasks without changing the ranking of these locations' relative distances. Furthermore, we design a Threshold-based Online task Assignment mechanism (TOA), which handles the one-worker-many-tasks assignment and achieves a satisfactory competitive ratio. Simulations based on two real-world datasets show that the proposed algorithm consistently outperforms the state-of-the-art approach.
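PLP's name points to the planar Laplace distribution used for geo-indistinguishability, so a hedged sketch of that base mechanism may help; the paper's distance-ranking-preservation step is not reproduced here, and epsilon is an illustrative parameter.

```python
import numpy as np

def planar_laplace(loc, epsilon=1.0, rng=np.random.default_rng()):
    """Obfuscate a 2-D location: uniform direction, radius ~ Gamma(2, 1/epsilon),
    which is exactly the radial law of the planar Laplace distribution."""
    theta = rng.uniform(0.0, 2.0 * np.pi)          # random direction
    r = rng.gamma(shape=2.0, scale=1.0 / epsilon)  # planar Laplace radius
    return loc + r * np.array([np.cos(theta), np.sin(theta)])

worker = np.array([30.5, 104.1])                   # illustrative coordinates
reported = planar_laplace(worker, epsilon=2.0)     # sent to the platform
```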

Protect Privacy from Gradient Leakage Attack in Federated Learning

Junxiao Wang, Song Guo and Xin Xie (Hong Kong Polytechnic University, Hong Kong); Heng Qi (Dalian University of Technology, China)

Federated Learning (FL) is susceptible to gradient leakage attacks, as recent studies show the feasibility of reconstructing clients' private training data from publicly shared gradients. Existing work addresses this problem by incorporating privacy protection mechanisms such as homomorphic encryption and local differential privacy to prevent data leakage. However, these solutions incur either significant communication and computation costs or significant loss of training accuracy. In this paper, we observe that the sensitivity of gradient changes with respect to training data is the essential measure of information leakage risk. Based on this observation, we present a defense whose intuition is to perturb gradients in proportion to their information leakage risk, so that the defense overhead stays lightweight while privacy protection remains adequate. Our second key observation is that the global correlation of gradients can compensate for this perturbation, allowing training to achieve guaranteed accuracy. We conduct experiments on MNIST, Fashion-MNIST, and CIFAR-10, defending against two gradient leakage attacks. Without sacrificing accuracy, the results demonstrate that our lightweight defense can decrease the PSNR and SSIM between reconstructed images and raw images by more than 60% for both attacks, compared with baseline defensive methods.
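The following toy sketch illustrates the stated intuition, perturbing gradient entries in proportion to an estimated sensitivity to the training input; the linear model, finite-difference proxy, and noise scale are all assumptions for illustration, not the paper's method.

```python
import numpy as np

def grad(w, x, y):
    """Gradient of a squared loss for a linear model -- a simple stand-in."""
    return 2.0 * (w @ x - y) * x

def risk_matched_gradient(w, x, y, delta=1e-3, noise_scale=0.1,
                          rng=np.random.default_rng()):
    g = grad(w, x, y)
    # Finite-difference proxy for how much the gradient moves when the
    # training input moves: larger values indicate higher leakage risk.
    sensitivity = np.abs(grad(w, x + delta, y) - g) / delta
    sensitivity /= sensitivity.max() + 1e-12       # normalize to [0, 1]
    return g + rng.normal(0.0, noise_scale, g.shape) * sensitivity

w = np.zeros(4)
x = np.array([0.2, -1.0, 0.5, 0.3])
shared = risk_matched_gradient(w, x, y=1.0)        # what a client would share
```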

When Deep Learning Meets Steganography: Protecting Inference Privacy in the Dark

Qin Liu (Hunan University & Temple University, China); Jiamin Yang and Hongbo Jiang (Hunan University, China); Jie Wu (Temple University, USA); Tao Peng (Guangzhou University, China); Tian Wang (Beijing Normal University & UIC, China); Guojun Wang (Guangzhou University, China)

While cloud-based deep learning provides high-accuracy inference, it poses potential privacy risks when sensitive data is exposed to untrusted servers. In this paper, we explore the feasibility of using steganography to preserve inference privacy. Specifically, we devise GHOST and GHOST+, two private inference solutions that employ steganography to make sensitive images invisible during the inference phase. Motivated by the fact that deep neural networks (DNNs) are inherently vulnerable to adversarial attacks, our main idea is to turn this vulnerability into a weapon for data privacy, causing the DNN to misclassify a stego image into the class of the sensitive image hidden inside it. The main difference between the two is that GHOST retrains the DNN into a poisoned network that learns the hidden features of sensitive images, whereas GHOST+ leverages a generative adversarial network (GAN) to produce adversarial perturbations without altering the DNN. For enhanced privacy and a better computation-communication trade-off, both solutions adopt an edge-cloud collaborative framework. Compared with previous solutions, this is the first work that successfully integrates steganography with the nature of DNNs to achieve private inference while ensuring high accuracy. Extensive experiments validate that steganography has excellent ability in accuracy-aware privacy protection for deep learning.
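To make "hiding a sensitive image inside a stego image" concrete, here is a toy least-significant-bit embedding; GHOST and GHOST+ instead use learned embeddings together with a poisoned network or GAN-generated perturbations, which this sketch does not reproduce.

```python
import numpy as np

def embed_lsb(cover, secret, bits=2):
    """Hide the top `bits` bits of `secret` in the low bits of `cover`."""
    mask = (0xFF >> bits) << bits                  # e.g. 0b11111100 for bits=2
    return (cover & mask) | (secret >> (8 - bits))

def extract_lsb(stego, bits=2):
    """Recover a coarse version of the hidden image from the low bits."""
    return (stego & ((1 << bits) - 1)) << (8 - bits)

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (32, 32), dtype=np.uint8)    # innocuous image
secret = rng.integers(0, 256, (32, 32), dtype=np.uint8)   # sensitive image
stego = embed_lsb(cover, secret)                  # visually resembles the cover
recovered = extract_lsb(stego)                    # approximates the secret
```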

Session Chair

Yupeng Li (Hong Kong Baptist University)

Session A-6

Mobile Security

May 4 Wed, 4:30 PM — 6:00 PM EDT

Big Brother is Listening: An Evaluation Framework on Ultrasonic Microphone Jammers

Yike Chen, Ming Gao, Yimin Li, Lingfeng Zhang, Li Lu and Feng Lin (Zhejiang University, China); Jinsong Han (Zhejiang University & School of Cyber Science and Technology, China); Kui Ren (Zhejiang University, China)

Covert eavesdropping via microphones has always been a major threat to user privacy. Benefiting from the acoustic non-linearity property, the ultrasonic microphone jammer (UMJ) is effective in resisting this long-standing attack. However, prior UMJ research underestimates the adversary's real-world attack capability and misses critical metrics needed for a thorough evaluation. The strong threat-model assumptions that an adversary cannot retrieve information when the word recognition rate is low, and that the adversary's denoising abilities are weak, lead these works to overlook the vulnerability of existing UMJs. As a result, the resilience of their UMJs is overestimated. In this paper, we refine the adversary model and comprehensively investigate potential eavesdropping threats. Correspondingly, we define a total of 12 metrics necessary for evaluating UMJs' resilience. Using these metrics, we propose a comprehensive framework to quantify UMJs' practical resilience. It fully covers three perspectives that prior works ignored to some degree: ambient information, semantic comprehension, and collaborative recognition. Guided by this framework, we can thoroughly and quantitatively evaluate the resilience of existing UMJs against eavesdroppers. Our extensive assessment reveals that most existing UMJs are vulnerable to sophisticated adversarial approaches. We further outline the key factors influencing jammers' performance and present constructive suggestions for future UMJ designs.
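Since the paper's 12 metrics are not listed in the abstract, the sketch below only illustrates, hypothetically, how normalized per-metric scores might be aggregated into a single resilience value; the example scores and uniform weights are invented placeholders.

```python
def resilience_score(metric_values, weights=None):
    """Aggregate normalized per-metric scores in [0, 1] into one value."""
    if weights is None:
        weights = [1.0] * len(metric_values)
    return sum(w * m for w, m in zip(weights, metric_values)) / sum(weights)

# Hypothetical scores for the three perspectives named in the abstract:
# ambient information, semantic comprehension, collaborative recognition.
print(resilience_score([0.7, 0.4, 0.9]))
```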

InertiEAR: Automatic and Device-independent IMU-based Eavesdropping on Smartphones

Ming Gao, Yajie Liu, Yike Chen, Yimin Li, Zhongjie Ba and Xian Xu (Zhejiang University, China); Jinsong Han (Zhejiang University & School of Cyber Science and Technology, China)

IMU-based eavesdropping has raised growing concerns over smartphone users' privacy. In such attacks, adversaries utilize IMUs, which require zero permissions to access, to acquire speech. A common countermeasure is to limit the IMU sampling rate (to within 200 Hz) so as to reduce the overlap between the vocal fundamental band (85-255 Hz) and the band of inertial measurements (0-100 Hz). Nevertheless, we experimentally observe that IMUs sampling below 200 Hz still record adequate speech-related information because of aliasing distortions. Accordingly, we propose a practical side-channel attack, InertiEAR, that breaks the sampling-rate-restriction defense against zero-permission eavesdropping. It leverages IMUs to eavesdrop on both the top and bottom speakers of smartphones.
In the InertiEAR design, we use a mathematical model to exploit the coherence between the responses of the built-in accelerometer and gyroscope as well as their hardware diversity. The coherence allows precise segmentation without manual assistance. We also mitigate the impact of hardware diversity, achieving better device-independent performance than existing approaches, which must massively increase training data from different smartphones to obtain a scalable network model. These two advantages not only re-enable zero-permission attacks but also extend the attack surface and degree of danger to off-the-shelf smartphones. InertiEAR achieves a recognition accuracy of 78.8%, with a cross-device accuracy of up to 49.8% across 12 smartphones.
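The aliasing observation can be reproduced with a few lines of numpy: a vocal component above the 100 Hz Nyquist limit of a 200 Hz IMU folds back into the measurable band instead of vanishing. The frequencies below are illustrative, not taken from the paper.

```python
import numpy as np

fs_imu = 200.0                        # capped IMU sampling rate (Hz)
f_voice = 180.0                       # vocal harmonic above the 100 Hz Nyquist
t = np.arange(0, 1.0, 1.0 / fs_imu)   # one second of IMU samples
samples = np.sin(2 * np.pi * f_voice * t)

spectrum = np.abs(np.fft.rfft(samples))
freqs = np.fft.rfftfreq(len(samples), 1.0 / fs_imu)
print(f"{f_voice:.0f} Hz tone aliases to ~{freqs[np.argmax(spectrum)]:.0f} Hz")
```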

JADE: Data-Driven Automated Jammer Detection Framework for Operational Mobile Networks

Caner Kilinc (University of Edinburgh, Sweden); Mahesh K Marina (The University of Edinburgh, United Kingdom (Great Britain)); Muhammad Usama (Information Technology University (ITU), Punjab, Lahore, Pakistan); Salih Ergüt (Oredata, Turkey & Rumeli University, Turkey); Jon Crowcroft (University of Cambridge, United Kingdom (Great Britain)); Tugrul Gundogdu and Ilhan Akinci (Turkcell, Turkey)

Wireless jammer activity from malicious or malfunctioning devices causes significant disruption to mobile network services and degrades user QoE. In practice, detecting such activity is manually intensive and costly, taking days or weeks after jammer activation. We present a novel data-driven jammer detection framework, termed JADE, that leverages continually collected operator-side cell-level KPIs to automate this process. As part of this framework, we develop two deep-learning-based semi-supervised anomaly detection methods tailored for the jammer detection use case. JADE features further innovations, including an adaptive thresholding mechanism and transfer-learning-based training, to efficiently scale for operation in real-world mobile networks. Using a real-world 4G RAN dataset from a multinational mobile network operator, we demonstrate the efficacy of the proposed jammer detection methods vis-a-vis commonly used anomaly detection methods. We also demonstrate the robustness of our methods in accurately detecting jammer activity across multiple frequency bands and diverse types of jammers. We present real-world validation results from applying our methods in the operator's network for online jammer detection, along with promising results on pinpointing jammer locations, using cell site location data, when our methods spot jammer activity in the network.
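As a rough illustration of adaptive thresholding (not JADE's deep semi-supervised detectors), the sketch below flags a cell-level anomaly score when it exceeds a threshold tracked from recent history; the window length and multiplier are assumed values.

```python
import numpy as np

def adaptive_threshold(scores, window=48, k=3.0):
    """Flag scores[i] when it exceeds rolling median + k * rolling MAD."""
    flags = np.zeros(len(scores), dtype=bool)
    for i in range(window, len(scores)):
        hist = scores[i - window:i]
        med = np.median(hist)
        mad = np.median(np.abs(hist - med)) + 1e-9
        flags[i] = scores[i] > med + k * mad
    return flags

rng = np.random.default_rng(1)
kpi_scores = rng.normal(0.0, 1.0, 200)     # anomaly scores per KPI interval
kpi_scores[150:160] += 8.0                 # injected jammer-like excursion
print(np.where(adaptive_threshold(kpi_scores))[0])
```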

MDoC: Compromising WRSNs through Denial of Charge by Mobile Charger

Chi Lin, Pengfei Wang, Qiang Zhang, Hao Wang, Lei Wang and Guowei WU (Dalian University of Technology, China)

Wireless power transfer technology enables power to be transferred between transceivers wirelessly, giving rise to the concept of wireless rechargeable sensor networks (WRSNs). Previous work paid little attention to network security issues, leaving WRSNs prone to novel attacks. In this work, we focus on developing a denial-of-charge attack for WRSNs that corrupts network functionality by manipulating a malicious mobile charger. We formalize the maximization of destructiveness (MAD) problem and propose a denial-of-charge attacking method, termed MDoC, with a performance guarantee. MDoC consists of two attacking rounds: it first triggers sensors to send requests, creating a request-explosion phenomenon, and then computes the longest charging route so that as many waiting nodes as possible starve to death. Finally, extensive testbed experiments and simulations verify the performance of MDoC. The results reveal that the MDoC attack is able to exhaust at least 20% additional nodes without being noticed.
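The sketch below speculates on the second attacking round with a farthest-neighbor heuristic: the malicious charger serves pending requests along a deliberately long route so that waiting nodes drain their batteries. This heuristic is a stand-in; the paper's actual route construction and its performance guarantee are not reproduced.

```python
import numpy as np

def longest_route_greedy(charger, requests):
    """Order pending charging requests by always visiting the farthest one."""
    pos = np.asarray(charger, dtype=float)
    pending = [np.asarray(r, dtype=float) for r in requests]
    route = []
    while pending:
        nxt = int(np.argmax([np.linalg.norm(p - pos) for p in pending]))
        pos = pending.pop(nxt)                     # farthest next stop
        route.append(pos)
    return route

# Requests lured out in the first round, then served along a long route:
route = longest_route_greedy([0, 0], [[1, 1], [5, 0], [0, 4], [3, 3]])
```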

Session Chair

Chi Lin (Dalian University of Technology)
