Session 1-C

Privacy I

Tue, Jul 7, 2:00 PM — 3:30 PM EDT

(How Much) Does a Private WAN Improve Cloud Performance?

Todd W Arnold, Ege Gurmericliler and Georgia Essig (Columbia University, USA); Arpit Gupta (Columbia University); Matt Calder (Microsoft); Vasileios Giotsas (Lancaster University, United Kingdom (Great Britain)); Ethan Katz-Bassett (Columbia University, USA)

The buildout of private Wide Area Networks (WANs) by cloud providers allows providers to extend their network to more locations and establish direct connectivity with end user Internet Service Providers (ISPs). Tenants of the cloud providers benefit from this proximity to users, which is supposed to provide improved performance by bypassing the public Internet. However, the performance impact of private WANs is not widely understood. To isolate the impact of a private WAN, we measure from globally distributed vantage points to a large cloud provider, comparing performance when using its worldwide WAN and when forcing traffic to instead use the public Internet. The benefits are not universal. While 40% of our vantage points saw improved performance when using the WAN, half of our vantage points did not see significant performance improvement, and 10% had better performance over the public Internet. We find that the benefits of the private WAN tend to improve with client-to-server distance, but that the benefits (or drawbacks) to a particular vantage point depend on specifics of its geographic and network connectivity.

De-anonymization of Social Networks: the Power of Collectiveness

Jiapeng Zhang and Luoyi Fu (Shanghai Jiao Tong University, China); Xinbing Wang (Shanghai Jiao Tong University, China); Songwu Lu (University of California at Los Angeles, USA)

The interaction among users in social networks raises concerns about user privacy, since it helps attackers identify users by matching an anonymized network against a correlated sanitized one. Prior work on this de-anonymization problem falls into either a seeded or a seedless setting, depending on whether some nodes are pre-identified. The seedless case is more challenging, since the adjacency matrix alone delivers limited structural information.

To address this issue, we, for the first time, integrate multi-hop relationships, which exhibit more structural commonness between networks, into seedless de-anonymization. We leverage these multi-hop relations to minimize the total disagreement of multi-hop adjacency matrices, which we call the collective adjacency disagreement (CAD), between two networks. Theoretically, we demonstrate that CAD enlarges the gap between wrongly and correctly matched node pairs, whereby two networks can be correctly matched w.h.p. even when the network density is below log(n). Algorithmically, we adopt the conditional gradient descent method on a collective-form objective, which efficiently finds the minimal CAD for networks with broad degree distributions. Experiments achieve high accuracy thanks to the rich information carried by collectiveness: most nodes can be correctly matched even in cases where using adjacency relations alone fails.
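The collective objective sketched in this abstract can be written roughly as follows; the notation is ours, not the paper's (A_1^(k), A_2^(k) denote the assumed k-hop adjacency matrices of the two networks, and Pi a permutation matrix):

```latex
\min_{\Pi \in \mathcal{P}} \; \mathrm{CAD}(\Pi)
  \;=\; \sum_{k=1}^{K} \bigl\| A_1^{(k)} - \Pi\, A_2^{(k)} \Pi^{\top} \bigr\|_{1}
```

For conditional-gradient (Frank-Wolfe) steps to apply, the discrete permutation set P is presumably relaxed to its convex hull, the doubly stochastic matrices, and the final matching is rounded back to a permutation.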

Towards Correlated Queries on Trading of Private Web Browsing History

Hui Cai (Shanghai Jiao Tong University, China); Fan Ye and Yuanyuan Yang (Stony Brook University, USA); Yanmin Zhu (Shanghai Jiao Tong University, China); Jie Li (Shanghai Jiao Tong University, China)

With the commoditization of private data, data trading that accounts for user privacy protection has become a fascinating research topic. Trading private web browsing histories brings large economic value to data consumers when leveraged for targeted advertising. In this paper, we study the trading of multiple correlated queries over private web browsing history data. We propose TERBE, a novel trading framework for correlaTed quEries based on pRivate web Browsing historiEs. TERBE first devises a modified matrix mechanism to perturb query answers. It then quantifies privacy loss under a relaxation of classical differential privacy and a newly devised mechanism with relaxed matrix sensitivity, and further compensates data owners for their diverse privacy losses in a satisfactory manner. Our analysis and experiments on real data demonstrate that TERBE balances total error and privacy preferences well within acceptable running time, and also achieves all desired economic properties of budget balance, individual rationality, and truthfulness.
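For readers unfamiliar with the (unmodified) matrix mechanism that TERBE builds on, here is a minimal sketch with assumed names — not TERBE itself, and none of its modifications or relaxed sensitivity. A workload of linear queries W over a histogram x is answered by perturbing a strategy matrix A's answers with Laplace noise and reconstructing:

```python
import numpy as np

rng = np.random.default_rng(0)

def matrix_mechanism(W, A, x, eps):
    """Answer workload W over histogram x under eps-DP (sketch).

    Perturb the strategy answers A @ x with Laplace noise calibrated
    to A's L1 sensitivity, then reconstruct the workload answers via
    the pseudoinverse of A.
    """
    sensitivity = np.abs(A).sum(axis=0).max()  # max L1 column norm of A
    noisy = A @ x + rng.laplace(0.0, sensitivity / eps, size=A.shape[0])
    return W @ np.linalg.pinv(A) @ noisy
```

With A equal to the identity this degenerates to the plain Laplace mechanism; choosing a better strategy A is what lets the mechanism trade error across correlated queries.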

Towards Pattern-aware Privacy-preserving Real-time Data Collection

Zhibo Wang, Wenxin Liu and Xiaoyi Pang (Wuhan University, China); Ju Ren (Central South University, China); Zhe Liu (Nanjing University of Aeronautics and Astronautics, China & SnT, University of Luxembourg, Luxembourg); Yongle Chen (Taiyuan University of Technology, China)

Although time-series data collected from users can be utilized to provide services for various applications, it can reveal sensitive information about users. Recently, local differential privacy (LDP) has emerged as the state-of-the-art approach to protecting data privacy by perturbing data locally before outsourcing. However, existing LDP-based works perturb each data point separately, without considering the correlations between consecutive points in a time series. Thus, the important patterns of a time series might be distorted by existing LDP-based approaches, leading to severe degradation of data utility.

In this paper, we propose a novel pattern-aware privacy-preserving approach, called PatternLDP, that protects data privacy while still preserving the patterns of the time series. To this end, instead of providing the same level of privacy protection at each data point, each user samples only the salient points in the time series and adaptively perturbs them according to their impact on local patterns. In particular, we propose a pattern-aware sampling method to determine whether to sample and perturb the current data point, and an importance-aware randomization mechanism that adaptively perturbs sampled data locally while achieving a better trade-off between privacy and utility. Extensive experiments on real-world datasets demonstrate that PatternLDP outperforms existing mechanisms.
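A toy sketch of the sample-then-perturb idea may help here. The threshold rule, the parameter names, and the plain per-point Laplace noise below are our simplifications for illustration, not PatternLDP's actual pattern-aware sampling or importance-aware randomization:

```python
import numpy as np

rng = np.random.default_rng(1)

def perturb_series(series, eps, delta=0.5):
    """Sample-then-perturb sketch: report only 'salient' points.

    A point is sampled when it moves more than delta away from the
    last sampled value; each sampled value gets Laplace noise with
    per-point budget eps. Returns a list of (index, noisy_value).
    """
    out, last = [], None
    for t, v in enumerate(series):
        if last is None or abs(v - last) > delta:
            last = v
            out.append((t, v + rng.laplace(0.0, 1.0 / eps)))
    return out
```

On a flat series with one jump, only the first point and the jump are reported, so the noise budget is spent where the pattern actually changes rather than uniformly across every point.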

Session Chair

Vasanta Chaganti (Swarthmore College)

Session 2-C

Security I

Tue, Jul 7, 4:00 PM — 5:30 PM EDT

MagView: A Distributed Magnetic Covert Channel via Video Encoding and Decoding

Juchuan Zhang, Xiaoyu Ji and Wenyuan Xu (Zhejiang University, China); Yi-Chao Chen (Shanghai Jiao Tong University, China); Yuting Tang (University of California, Los Angeles, USA); Gang Qu (University of Maryland, USA)

Air-gapped networks achieve security by using physical isolation to keep computers and the network off the Internet. However, magnetic covert channels based on CPU utilization have been proposed to help secret data escape the Faraday cage and the air gap. Despite the success of such covert channels, they suffer from a high risk of being detected on the transmitter computer, and from the challenge of installing malware on such a computer. In this paper, we propose MagView, a distributed magnetic covert channel, where sensitive information is embedded in other data such as video and can be transmitted over the air-gapped internal network. When any computer uses the data, e.g., plays the video, the sensitive information leaks through the magnetic covert channel. The "separation" of information embedding and leaking, combined with the fact that the covert channel can be created on any computer, overcomes these limitations. We demonstrate that CPU utilization for video decoding can be effectively controlled by changing the video frame type and reducing the quantization parameter without video quality degradation. We prototype MagView and achieve up to 8.9 bps throughput with BER as low as 0.0057. Experiments under different environments show the robustness of MagView.
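The underlying channel primitive — modulating CPU utilization so that the accompanying magnetic emission encodes bits — can be sketched as follows. This is a generic illustration of CPU-load keying with names and duty cycles of our choosing, not MagView's video-decoding modulation:

```python
import time

def busy_wait(seconds):
    # Spin the CPU to raise utilization (and the accompanying
    # magnetic emission) for the given duration.
    end = time.perf_counter() + seconds
    while time.perf_counter() < end:
        pass

def transmit(bits, slot=0.05, duty_one=0.9, duty_zero=0.1):
    # One time slot per bit: a '1' is sent as a high CPU duty cycle,
    # a '0' as a low one. A nearby magnetometer sampling the field
    # strength once per slot can then threshold-decode the bits.
    for b in bits:
        duty = duty_one if b else duty_zero
        busy_wait(slot * duty)
        time.sleep(slot * (1 - duty))
```

MagView's contribution is to induce this load variation indirectly, through the frame types and quantization parameters of an innocuous video, so no transmitter-side malware is needed.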

Stealthy DGoS Attack: DeGrading of Service under the Watch of Network Tomography

Cho-Chun Chiu (The Pennsylvania State University, USA); Ting He (Penn State University, USA)

Network tomography is a powerful tool for monitoring the internal state of a closed network that cannot be measured directly, with broad applications in the Internet, overlay networks, and all-optical networks. However, existing network tomography solutions all assume that the measurements are trustworthy, leaving open how effective they are in an adversarial environment with possibly manipulated measurements. To understand the fundamental limit of network tomography in such a setting, we formulate and analyze a novel type of attack that aims at maximally degrading the performance of targeted paths without being localized by network tomography. By analyzing properties of the optimal attack, we formulate novel combinatorial optimizations to design the optimal attack strategy, which we then link to well-known problems and approximation algorithms. Our evaluations on real topologies demonstrate the substantial damage of such attacks, signaling the need for new defenses.

Voiceprint Mimicry Attack Towards Speaker Verification System in Smart Home

Lei Zhang, Yan Meng, Jiahao Yu, Chong Xiang, Brandon Falk and Haojin Zhu (Shanghai Jiao Tong University, China)

The prosperity of voice-controllable systems (VCSes) has dramatically changed our daily lifestyle and facilitated smart home deployment. Currently, most VCSes exploit automatic speaker verification (ASV) to protect against various voice attacks (e.g., replay attacks). In this study, we present VMask, a novel and practical voiceprint mimicry attack that can fool the ASV in a smart home and inject malicious voice commands as a legitimate user. The key observation behind VMask is that the deep learning models utilized by ASV are vulnerable to subtle perturbations in the input space. VMask leverages the idea of adversarial examples to generate such perturbations; added to speech samples collected from an arbitrary speaker, the crafted samples still sound like that speaker to humans but are verified as the targeted victim by ASV. Moreover, psychoacoustic masking is employed to keep the adversarial perturbation under the human perception threshold, making the victim unaware of ongoing attacks. We validate the effectiveness of VMask by performing comprehensive experiments on both grey-box (VGGVox) and black-box (Microsoft Azure Speaker Verification API) ASVs. Additionally, a real-world case study on Apple HomeKit demonstrates VMask's practicality on smart home platforms.
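The adversarial-example primitive this attack relies on can be illustrated with a toy gradient-sign step. The linear scorer below merely stands in for a deep ASV embedding score; the function names and the single-step form are our simplification, not VMask's actual optimization or psychoacoustic masking:

```python
import numpy as np

def adversarial_step(x, w, target_score, eps=0.01):
    """One FGSM-style step against a toy linear scorer s(x) = w . x.

    The step nudges x along the sign of the gradient so the score
    moves toward target_score, while each sample of x changes by at
    most eps -- the 'subtle perturbation' the abstract refers to.
    """
    grad = w                                   # d(w . x) / dx
    direction = np.sign(target_score - w @ x)  # push score up or down
    return x + eps * direction * np.sign(grad)
```

Against a real ASV model the gradient comes from backpropagation (grey box) or is estimated from queries (black box), and the per-sample bound eps is replaced by a frequency-dependent psychoacoustic threshold.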

Your Privilege Gives Your Privacy Away: An Analysis of a Home Security Camera Service

Jinyang Li and Zhenyu Li (Institute of Computing Technology, Chinese Academy of Sciences, China); Gareth Tyson (Queen Mary, University of London, United Kingdom (Great Britain)); Gaogang Xie (Institute of Computing Technology, Chinese Academy of Sciences, China)

Once considered a luxury, Home Security Cameras (HSCs) are now commonplace and constitute a growing part of the wider online video ecosystem. This paper argues that their expanding coverage and close integration with daily life may result not only in unique behavioral patterns, but also in key privacy concerns. This motivates us to perform a detailed measurement study of a major HSC provider (360 Home Security), covering 15.4M streams and 211K users. Our study takes two perspectives: (1) we explore the per-user behaviour of 360 Home Security, identifying core clusters of users; and (2) we build on this analysis to extract and predict privacy-compromising insights. Key observations include a highly asymmetrical traffic distribution, distinct usage patterns, wasted resources, and fixed viewing locations. Furthermore, we identify three privacy risks via formal methodologies and explore them in detail. We find that paid users are more likely to be exposed to attacks due to their heavier usage patterns. We conclude by proposing simple mitigations that can alleviate these risks.

Session Chair

Qiben Yan (Michigan State University)
