Workshops

The 10th International Workshop on Security and Privacy in Big Data (BigSecurity 2022)

Session BigSecurity-S2

Big Data Security 2

Conference
12:30 AM — 2:00 AM EDT
Local
May 1 Sun, 9:30 PM — 11:00 PM PDT

On the detection of adversarial attacks through reliable AI

Ivan Vaccari (National Research Council, Italy); Alberto Carlevaro (University of Genoa & CNR-IEIIT, Italy); Sara Narteni (Consiglio Nazionale delle Ricerche, Italy); Enrico Cambiaso (National Research Council, CNR-IEIIT, Italy); Maurizio Mongelli (National Research Council of Italy, Italy)

Adversarial machine learning manipulates datasets to mislead the decisions of machine learning algorithms. We propose a new approach to detecting adversarial attacks, based on eXplainable and Reliable AI. The results show that canonical algorithms may have difficulty identifying attacks, while the proposed approach correctly identifies different adversarial settings.

Overlapped Connected Dominating Set for Big Data Security

Li Yi (Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, China); Weidong Fang (Key Laboratory of Wireless Sensor Network & Communication, SIMIT, CAS, China); Wei Chen (China University of Mining and Technology (CUMT), China); Wuxiong Zhang (Shanghai Institute of Microsystem and Information Technology, China); Guoqing Jia (Qinghai University for Nationalities, China)

Transmission and aggregation of big data form an important foundation for model training in artificial intelligence and machine learning. However, the transmission of big data faces many security threats. In distributed networks, network entities carrying data may be physically compromised, leading to internal attacks. To deal with this threat, we propose the overlapped connected dominating set (OCDS) for multipath transmission. The OCDS is constructed by distributed algorithms to allow a distributed implementation. In this method, information is transmitted and compared across different paths to prevent compromised nodes from tampering with it. Simulation results show that this method can effectively defend against internal attacks, including black hole attacks and selective forwarding attacks.
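
As a rough illustration of the path-comparison idea, here is a minimal sketch in Python, assuming a hypothetical sink that receives one copy of the payload per node-disjoint path; the OCDS construction itself is not shown:

```python
from collections import Counter

def verify_multipath(copies):
    """Majority-vote over payload copies received via node-disjoint paths.

    copies: list of byte strings, one per path (None for a lost copy).
    A compromised relay on one path (e.g., a black hole or a selective-
    forwarding node) yields a missing or tampered copy, which the
    majority check exposes.
    """
    received = [c for c in copies if c is not None]  # drop black-holed paths
    if not received:
        raise RuntimeError("all paths failed: possible coordinated attack")
    value, votes = Counter(received).most_common(1)[0]
    if votes <= len(copies) // 2:
        raise RuntimeError("no majority: tampering suspected")
    return value

# Example: the relay on path 2 is compromised and rewrites the payload.
print(verify_multipath([b"reading=42", b"evil", b"reading=42"]))
```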

SCSGuard: Deep Scam Detection for Ethereum Smart Contracts

Huiwen Hu, Qianlan Bai and Yuedong Xu (Fudan University, China)

Smart contracts are the building blocks of blockchain systems, enabling automated peer-to-peer transactions and decentralized services. With the increasing popularity of smart contracts, blockchain systems, in particular Ethereum, have become a "paradise" for versatile fraud activities. In this work, we present SCSGuard, a novel deep learning scam detection framework that harnesses the automatically extractable bytecodes of smart contracts as new features. We design a GRU network with an attention mechanism that learns from N-gram bytecode patterns to determine whether a smart contract is fraudulent. Our framework is advantageous over the baseline algorithms in three respects. Firstly, SCSGuard provides a unified solution to different scam genres, removing the need for code-analysis skills. Secondly, inference with SCSGuard is faster than code analysis by several orders of magnitude. Thirdly, experimental results show that SCSGuard achieves high accuracy (0.92~0.94), precision (0.94~0.96) and recall (0.97~0.98) for both Ponzi and Honeypot scams under similar settings, and is potentially useful for detecting new phishing smart contracts.
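
A minimal sketch of a GRU-with-attention classifier of the kind the abstract describes; the layer sizes, two-class output, and byte-level tokenization are illustrative assumptions, not SCSGuard's exact design:

```python
import torch
import torch.nn as nn

class ScamDetector(nn.Module):
    """GRU over bytecode N-gram tokens, with attention pooling."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim, 1)   # scores each time step
        self.clf = nn.Linear(hidden_dim, 2)    # fraudulent vs. benign

    def forward(self, token_ids):               # (batch, seq_len)
        h, _ = self.gru(self.embed(token_ids))  # (batch, seq_len, hidden)
        weights = torch.softmax(self.attn(h).squeeze(-1), dim=1)
        context = (weights.unsqueeze(-1) * h).sum(dim=1)  # weighted pooling
        return self.clf(context)

model = ScamDetector(vocab_size=256)
logits = model(torch.randint(0, 256, (8, 512)))  # 8 contracts, 512 tokens each
```

The attention weights assign each bytecode position a learned importance, which is what lets a recurrent model focus on scam-indicative N-gram patterns.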

When Wireless Federated Learning Meets Physical Layer Security: The Fundamental Limits

Honan Zhang (Southwest Jiaotong University, China); Chuanchuan Yang (State Key Laboratory of Advanced Optical Communication Systems and Networks, Peking University, China); Bin Dai (Southwest Jiaotong University, China)

It has been shown that when federated learning (FL) is implemented in wireless communication systems, the channel noise directly serves as a privacy-inducing mechanism, meaning that uncoded transmission can achieve privacy "for free". However, due to the broadcast nature of wireless communication, wireless FL is susceptible to eavesdropping. Moreover, unlike the privacy requirement of FL (where some information leakage is allowed in order to preserve the accuracy of data analysis), the physical layer security (PLS) requirement is that the information leakage to the eavesdropper must vanish, so the previous uncoded transmission strategy is no longer viable for wireless FL under PLS. In this paper, we propose a novel architecture satisfying both the privacy requirement of FL and the PLS requirement of the wireless channels, and we characterize the corresponding capacity-equivocation region under a privacy-utility constraint. The study is further illustrated with numerical examples and simulation results.
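
For readers comparing the two requirements, a standard information-theoretic way to state them is sketched below, using the usual weak-secrecy notation with $W$ the confidential message/update and $Z^n$ the eavesdropper's observation over $n$ channel uses; the paper's exact formulation may differ:

```latex
% Privacy mechanism: bounded leakage is tolerated for some \epsilon > 0.
\frac{1}{n} I(W; Z^n) \le \epsilon
% Physical layer (weak) secrecy: leakage must vanish asymptotically.
\lim_{n \to \infty} \frac{1}{n} I(W; Z^n) = 0
```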

WMDefense: Using Watermark to Defense Byzantine Attacks in Federated Learning

Xu Zheng, Qihao Dong and Anmin Fu (Nanjing University of Science and Technology, China)

Federated learning enables data owners to train a global ML model without exchanging data. However, the unique training pattern of federated learning can be exploited by malicious adversaries, who degrade the accuracy of the federated model by sending malicious inputs during the training process. Existing Byzantine-robust federated learning algorithms remain vulnerable to customized local model poisoning attacks because they lack a suitable malicious-client detection mechanism. To defend against the latest Byzantine attacks, this work proposes an effective algorithm, WMDefense, which identifies malicious clients by embedding a watermark into the global model and tracking the degree of watermark recession after local model training. Our experiments apply WMDefense to two recent Byzantine attack algorithms on two publicly available datasets, showing that it defends well against both attacks. Furthermore, we compare WMDefense with two current state-of-the-art Byzantine-robust federated learning algorithms and show superior performance.
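
A minimal sketch of the recession measurement; the trigger set, how the watermark is embedded into the global model, and the flagging threshold are assumptions here, with WMDefense's actual design in the paper:

```python
import torch

def watermark_recession(global_model, local_model, trigger_x, trigger_y):
    """Measure how much a client's local update erased the server's watermark.

    trigger_x/trigger_y form a hypothetical backdoor-style trigger set the
    server embedded into the global model before distribution.
    """
    def acc(model):
        with torch.no_grad():
            preds = model(trigger_x).argmax(dim=1)
            return (preds == trigger_y).float().mean().item()
    # Large drop in trigger-set accuracy => the update is suspicious.
    return acc(global_model) - acc(local_model)

# A server could, e.g., flag clients whose recession deviates strongly
# from the median recession across all clients in the round.
```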

Session Chair

Qiang Ye (Memorial University of Newfoundland, Canada)

Session BigSecurity-S1

Big Data Security 1

Conference
10:30 AM — 12:00 PM EDT
Local
May 2 Mon, 7:30 AM — 9:00 AM PDT

Automatic Selection Attacks Framework for Hard Label Black-Box Models

Xiaolei Liu and Xiaoyu Li (University of Electronic Science and Technology of China, China); Desheng Zheng (Southwest Petroleum University, China); Jiayu Bai and Yu Peng (University of Electronic Science and Technology of China, China); Shibin Zhang (Chengdu University of Information Technology, China)

Current adversarial attacks against machine learning models can be divided into white-box and black-box attacks, and black-box attacks can be further subdivided into soft-label and hard-label attacks. The hard-label setting returns only the class with the highest prediction probability, which makes gradient estimation difficult; yet because of its wide applicability, exploring hard-label black-box attacks is of great research significance and practical value. This paper proposes an automatic selection attacks framework (ASAF), which builds on existing attack methods in two ways. Firstly, ASAF applies model equivalence to automatically select substitute models, generates adversarial examples on them, and completes black-box attacks based on their transferability. Secondly, specified feature selection and parallel attack methods are proposed to shorten the attack time and improve the attack success rate. The experimental results show that ASAF achieves a success rate of more than 90% for non-targeted attacks on the common models ResNet101 (CIFAR10) and InceptionV4 (ImageNet). Meanwhile, compared with FGSM and other attack algorithms, the attack time is reduced by at least 89.7% and 87.8%, respectively, on three traditional datasets. Besides, ASAF achieves a 90% attack success rate against an online model, BaiduAI digital recognition. In conclusion, ASAF is the first automatic selection attacks framework for hard-label black boxes, in which model equivalence, specified feature selection, and parallel attack methods speed up the creation of automatic attacks.
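
A minimal sketch of the transfer step once a substitute is chosen, using plain FGSM on the substitute; ASAF's model-equivalence selection, specified feature selection, and parallelization are not shown:

```python
import torch

def fgsm_transfer(substitute, x, y, eps=0.03):
    """Craft an adversarial example on a white-box substitute model (FGSM),
    then rely on transferability to fool the hard-label black box."""
    x = x.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(substitute(x), y)
    loss.backward()
    # One signed-gradient step, clipped back to valid pixel range.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# A hard-label black box returns only the top class, so attack success is
# simply: black_box(adv).argmax(dim=1) != y.
```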

Interpretability Evaluation of Botnet Detection Model based on Graph Neural Network

Xiaolin Zhu and Yong Zhang (Beijing University of Posts and Telecommunications, China); Zhao Zhang (Beijing University of Posts and Telecommunications, China); Guo Da, Qi Li and Zhao Li (Beijing University of Posts and Telecommunications, China)

Due to their conspicuous ability to capture topological characteristics, graph neural networks (GNNs) have been widely used in botnet detection and proven efficient. However, the black-box nature of GNN models makes it hard for users to trust these classifiers. In addition to high accuracy, stakeholders also hope that these models are consistent with human cognition. To address this problem, we propose a method to evaluate the trustworthiness of GNN-based botnet detection models, called BD-GNNExplainer. Concretely, BD-GNNExplainer extracts the data that contribute most to a GNN's decision by reducing the loss between the classification results produced when a selected subgraph is the GNN's input and those produced when the entire graph is the input. We then calculate the relevance between the model-relied data and the informative data to quantify an interpretability score. For GNN models with different structures, these scores indicate which one is more trustworthy, and ultimately become an essential basis for model optimization. To the best of our knowledge, our work is the first to discuss the interpretability of botnet detection systems, and it provides a guideline for making botnet detection methodology more understandable.
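
A minimal sketch of the subgraph-extraction step in the spirit of GNNExplainer; the soft edge mask, sparsity weight, and KL objective are illustrative assumptions, and BD-GNNExplainer's interpretability scoring is layered on top of an extraction like this:

```python
import torch

def explain_subgraph(gnn, x, adj, steps=200, lr=0.1):
    """Learn an edge mask whose induced subgraph reproduces the
    full-graph prediction.

    gnn: callable(x, adj) -> logits, assumed to accept a dense adjacency.
    """
    mask = torch.zeros_like(adj, requires_grad=True)
    target = gnn(x, adj).detach().softmax(dim=-1)      # full-graph output
    opt = torch.optim.Adam([mask], lr=lr)
    for _ in range(steps):
        sub_adj = adj * torch.sigmoid(mask)            # softly select edges
        logits = gnn(x, sub_adj)
        loss = torch.nn.functional.kl_div(             # match full-graph output
            logits.log_softmax(dim=-1), target, reduction="batchmean"
        ) + 0.01 * torch.sigmoid(mask).sum()           # encourage sparsity
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.sigmoid(mask)                         # edge importance scores
```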

A Generative Model for Evasion Attacks in Smart Grid

Venkata Praveen Kumar Madhavarapu (Missouri University of Science and Technology, USA); Shameek Bhattacharjee (Western Michigan University, USA); Sajal K. Das (Missouri University of Science and Technology, USA)

Adversarial machine learning (AML) studies how to fool a machine learning (ML) model with malicious inputs that degrade the ML method's performance. Within AML, evasion attacks are an attack category that manipulates input data during the testing phase to induce misclassification by the ML model; such manipulated inputs are called adversarial examples. In this paper, we propose a generative approach for crafting evasion attacks against three ML-based security classifiers. The proof-of-concept application for the ML-based security classifier is the classification of compromised smart meters launching false data injection. Our proposed solution is validated with a real smart metering dataset. We found degradation in compromised-meter detection performance under our proposed generative evasion attack.

IoT Botnet Detection framework from Network Behavior based on Extreme Learning Machine

Nasimul Hasan, Zhenxiang Chen, Chuan Zhao, Yuhui Zhu and Cong Liu (University of Jinan, China)

IoT devices have suffered from fundamental security flaws in recent years, leaving them exposed to various threats and malware, particularly IoT botnets. In contrast to conventional malware on desktop computers and Android, the heterogeneous processor architectures of IoT devices pose various challenges to researchers. Traditional methodologies are difficult to apply because of the IoT's unique properties, such as resource-constrained devices, enormous volumes of data, and the requirement of real-time detection. This paper explores the heterogeneity of bots across different IoT devices and stages of deployment, then proposes a lightweight framework to detect IoT botnets and botnet families. The framework operates on bot behavior data using a simple yet effective learning-based method, the Extreme Learning Machine. For IoT botnet detection, the experimental results demonstrate that the suggested technique achieves accuracy, precision, and recall of 97.7%, 97.1%, and 97.1%, respectively, and the detection performance on botnet families is also encouraging. Furthermore, a comparison of our framework to other current approaches reveals that it produces better results, particularly in terms of training time, which gives it a considerable edge over other learning-based methods.
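
The Extreme Learning Machine itself is compact enough to sketch: a fixed random hidden layer plus one least-squares solve for the output weights. Sizes and inputs below are illustrative assumptions, with behavior-feature extraction assumed done:

```python
import numpy as np

class ELM:
    """Minimal Extreme Learning Machine classifier."""
    def __init__(self, n_hidden=200, rng=None):
        self.n_hidden = n_hidden
        self.rng = rng or np.random.default_rng(0)

    def fit(self, X, Y):                        # Y one-hot, shape (n, classes)
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)        # random nonlinear features
        self.beta = np.linalg.pinv(H) @ Y       # single least-squares solve
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)
```

The absence of iterative gradient descent, replaced by one pseudo-inverse solve, is what gives ELM-style training its speed edge over other learning-based methods.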

Learning Based and Physical-layer Assisted Secure Computation Offloading in Vehicular Spectrum Sharing Networks

Ying Ju and Yuchao Chen (Xidian University, China); Haoyu Wang (University of California Irvine, USA); Lei Liu, Qingqi Pei and Zhiwei Cao (Xidian University, China); Neeraj Kumar (Thapar University Patiala, India)

Massive computing tasks are generated by the widespread big data analysis applications in vehicular edge computing (VEC) networks. However, the offloading process in VEC networks is exposed to the threat of information leakage. Physical layer security (PLS) technology is an effective solution for protecting confidential information. Furthermore, the contradiction between massive data transmission and limited communication resources creates an urgent need for schemes that improve resource utilization. In this paper, we design a joint secure offloading and resource allocation (SoRA) scheme based on PLS technology and a spectrum sharing architecture. We aim to minimize the system processing delay of all vehicular users (VUs) while ensuring information security, by jointly optimizing spectrum access, transmit power, and computing resource allocation. We then adopt a multi-agent deep reinforcement learning algorithm to solve the optimization problem. With proper training, we demonstrate that the VU agents can successfully cooperate to improve the system processing delay and secure the offloading process.

Session Chair

Yuedong Xu (Fudan University, P.R. China)

Session BigSecurity-S3

Big Data Privacy 1

Conference
3:30 PM — 5:00 PM EDT
Local
May 2 Mon, 12:30 PM — 2:00 PM PDT

FedTSE: Low-Cost Federated Learning for Privacy-Preserved Traffic State Estimation in IoV

Xiaoming Yuan and Jiahui Chen (Northeastern University, China); Ning Zhang (University of Windsor, Canada); Chunsheng Zhu (Southern University of Science and Technology, China); Qiang Ye (Memorial University of Newfoundland, Canada); Sherman Shen (University of Waterloo, Canada)

Traffic state estimation (TSE) is an important aspect of the Internet of Vehicles (IoV) for road path planning and a better driving experience. In IoV, with the support of edge intelligence, real-time traffic data can be processed by edge computing (EC) servers for informed decision making. However, collecting trajectory information from vehicles in a centralized manner may increase transmission delay and leak driver privacy. In this paper, we first propose a federated learning (FL) framework for TSE, named FedTSE, which preserves privacy by jointly considering TSE accuracy, model computation, and transmission cost. A TSE model is then designed based on long short-term memory (LSTM) as the local training model for joint prediction of vehicular speed and traffic flow. Considering the limits on computation and communication resources, we further propose a deep reinforcement learning (DRL)-based algorithm for model parameter uploading/downloading decisions, to improve the estimation accuracy of local models and balance the tradeoff between computation and communication cost. Simulation results show that the proposed FedTSE achieves lower cost and higher estimation accuracy in TSE.

FedCluster: A Federated Learning Framework for Cross-Device Private ECG Classification

Daoqin Lin, Yuchun Guo, Huan Sun and Yishuai Chen (Beijing Jiaotong University, China)

Federated learning enables a private distributed model to be trained by sharing model parameters instead of raw data among data holders, but it performs poorly for private electrocardiogram (ECG) detection. In fact, the ECG data of different clients is extremely non-independent and identically distributed (Non-IID), as a client has only 1 or 2 of the total 5 classes of heartbeats. Subset data sharing was proposed to compensate for Non-IID data in image tasks, but the original federated learning algorithm FedAVG still performs poorly with such shared data, especially for rare types, because the parameters from different clients are treated equally at the server. Based on our observation that clients can be clustered in terms of their ECG data, we propose a novel parameter update framework, named FedCluster, to improve the federated model's ability to diagnose clients with rare types. In FedCluster, we cluster the parameters submitted by different clients to the server to transfer knowledge between similar clients, especially those of rare types, and we propose a hierarchical method for obtaining globally shared data suited to the extremely unbalanced distribution of ECG data. Considering the practicality of the proposed method, we select only the modified Lead II of the MIT-BIH database to verify it. The experimental results show that, compared with the original FedAVG, 1) our method improves the overall classification accuracy by 6.14%; and 2) for a client with a highly skewed data distribution, such as client 232, recognition accuracy improves by 89.38%.
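
A minimal sketch of the cluster-then-aggregate idea; k-means over flattened client updates is an assumption for illustration, and FedCluster's actual clustering criterion and hierarchical shared-data method are described in the paper:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_aggregate(client_params, n_clusters=3):
    """Group client model updates by similarity, then average within
    clusters, so knowledge flows between similar (e.g., rare-type) clients
    instead of being diluted in one global average.

    client_params: (n_clients, n_weights) array of flattened parameters.
    Returns one aggregated parameter vector per cluster.
    """
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(client_params)
    return {c: client_params[labels == c].mean(axis=0) for c in set(labels)}
```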

Incentive and Knowledge Distillation Based Federated Learning for Cross-Silo Applications

Beibei Li, Yaxin Shi and Yuqing Guo (Sichuan University, China); Qinglei Kong (The Chinese University of Hong Kong, Shenzhen, China); Yukun Jiang (Sichuan University, China)

Big data in real-world cross-silo applications, such as healthcare, finance, and smart cities, is usually characterized by heterogeneity, leaving federated learning a big challenge. Further, many existing federated learning schemes fail to fully consider the diverse willingness and contributions of participating data providers. In this paper, to address these challenges, we propose an incentive and knowledge distillation based federated learning scheme for cross-silo applications. Specifically, we first develop a new federated learning framework that supports cooperative learning among diverse, heterogeneous client models. Second, we devise an incentive mechanism, which not only stimulates participants to provide more high-quality data but also improves clients' enthusiasm for participating in federated learning. Third, a novel knowledge distillation algorithm is designed to deal with data heterogeneity. Extensive experiments on the MNIST/FEMNIST and CIFAR10/100 datasets, with both IID and Non-IID settings, demonstrate the high effectiveness of the proposed scheme compared with state-of-the-art studies.
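
A minimal sketch of the knowledge-distillation building block (the classic soft-label loss of Hinton et al.); how the paper's algorithm applies distillation across heterogeneous silos is its own design:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Soft-label knowledge distillation: match the student's softened
    class distribution to the teacher's, e.g., over logits exchanged on
    shared data between heterogeneous client models."""
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale gradients for temperature T
```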

Novel Efficient Block Chain and Rule-based Intelligent Privacy Share System in Future Network

Xiaolong Deng (Beijing University of Posts and Telecommunications, China); Tiejun Lv (Beijing University of Posts and Telecommunications, China); LinMing Song (Beijing University of Posts and Telecommunications, China)

With the rapid development of the Internet and mobile intelligent devices, collecting and releasing personal privacy information on the Internet and in other public arenas has become more and more convenient, and personal privacy leakage incidents occur frequently as a result. At the same time, with the global commercial deployment of 5G mobile communication technologies, faster future networks, including the Internet of Things, can support richer application scenarios, but they also face more severe privacy-leakage challenges as massive numbers of intelligent devices collect and transmit large amounts of personal information over the Internet at high speed. In this paper, we propose a highly efficient new privacy sharing system based on China's first national standard for individual privacy, the Personal Information Protection Guide of Information Security Technology in Public and Commercial Information Service Systems, and the European Union's GDPR data protection standard. The new privacy sharing system is built on blockchain and rule engine technology to suit future 5G and even faster networks. In view of the core content of China's personal privacy information protection standards and the GDPR, four key roles (personal information subject, personal information manager, personal information receiver, and independent assessment agency) and five related key core domains are concretely and abstractly modeled in this paper. Furthermore, based on this abstract model, the proposed privacy sharing system has been implemented as real software with blockchain and hierarchical privacy protection. To the best of our knowledge, the provided experimental results show that our software is on the leading edge, making the sharing of privacy information a blockchain-based system with traceability and decentralization.

Hierarchical Attention Network for Interpretable and Fine-Grained Vulnerability Detection

Mianxue Gu (Hainan University, China); Hantao Feng (Xidian University & National Computer Network Intrusion Protection Center, China); Hongyu Sun (Xidian University, China); Peng Liu (Pennsylvania State University, USA); Qiuling Yue (Hainan University, China); Jinglu Hu (Waseda University, Japan); Chunjie Cao (Hainan University, China); Zhang Yuqing (University of Chinese Academy of Sciences, China)

With the rapid development of software technology, the number of vulnerabilities is proliferating, which makes vulnerability detection an important topic of security research. Existing works focus only on predicting whether a given program is vulnerable and are less interpretable. To overcome these deficiencies, we apply a hierarchical attention network to vulnerability detection for interpretable and fine-grained vulnerability discovery. In particular, our model consists of two attention layers, at the line level and the token level of the code, to locate which lines or tokens are important for discovering vulnerabilities. Furthermore, to accurately extract features from source code, we process the code based on the abstract syntax tree and embed the syntax tokens into vectors. We evaluate the performance of our model on two widely used benchmark datasets from SARD, CWE-119 (Buffer Error) and CWE-399 (Resource Management Error). Experiments show that the F1 score of our model reaches 86.1% (CWE-119) and 90.0% (CWE-399), significantly better than state-of-the-art models. In particular, our model can directly mark the importance of different lines and tokens, which provides useful information for further vulnerability exploitation and repair.
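
A minimal sketch of the attention pooling that such a model stacks twice, once over tokens and once over lines; the dimensions and additive scoring are illustrative assumptions:

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Additive attention pooling; returns the context vector and the
    weights, the weights being what makes lines/tokens inspectable."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, h):                        # (batch, steps, dim)
        w = torch.softmax(self.score(h).squeeze(-1), dim=1)
        return (w.unsqueeze(-1) * h).sum(dim=1), w

# Hierarchical use (sketch): a token-level AttentionPool condenses each
# code line's token embeddings into a line vector; a line-level
# AttentionPool condenses line vectors into a function vector fed to the
# vulnerability classifier. The two sets of weights mark which tokens and
# which lines drove the prediction.
```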

Session Chair

Ning Zhang (University of Windsor, Canada)

Session BigSecurity-S4

Big Data Privacy 2

Conference
5:30 PM — 7:00 PM EDT
Local
May 2 Mon, 2:30 PM — 4:00 PM PDT

DRL-based Optimization of Privacy Protection and Computation Performance in MEC Computation Offloading

Zhengjun Gao and Guowen Wu (Donghua University, China); Yizhou Shen (Cardiff University, United Kingdom); Hong Zhang (Donghua University, China); Shigen Shen (Shaoxing University, China); Qiying Cao (Donghua University, China)

The emergence of mobile edge computing (MEC) puts unprecedented pressure on privacy protection, even as computation offloading improves computation performance in terms of energy consumption and computation delay. To this end, we propose a deep reinforcement learning (DRL)-based computation offloading scheme that jointly optimizes privacy protection and computation performance. The privacy exposure risk caused by offloading history is investigated, and an analysis metric is defined to evaluate the privacy level. To find the optimal offloading strategy, an algorithm combining actor-critic, off-policy, and maximum-entropy methods is proposed to accelerate learning. Simulation results show that the proposed scheme outperforms other benchmarks.
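
"Actor-critic, off-policy, and maximum entropy" points to a soft-actor-critic-style method; as background, the standard maximum-entropy RL objective (with temperature $\alpha$ trading reward against policy entropy, encouraging exploration and faster learning) reads:

```latex
J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi}
  \Big[ r(s_t, a_t) + \alpha \, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \Big]
```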

PRESSGenDB: PRivacy-prEserving Substring Search on Encrypted Genomic DataBase

Sara Jafarbeiki and Amin Sakzad (Monash University, Australia); Shabnam Kasra Kermanshahi (RMIT University, Australia); Ron Steinfeld (Monash University, Australia); Raj Gaire (Data61|CSIRO, Australia)

Efficient sequencing methods produce large amounts of genetic data and make it accessible to researchers, which has led genomics to be considered a legitimate big data field. Because genomic datasets are large, outsourcing the data to the cloud becomes necessary: it relieves data owners of local storage management, while encrypting the data before outsourcing maintains its confidentiality. The size of genomic data also makes executing researchers' queries securely and efficiently challenging. In this paper, we propose PRESSGenDB, a method for securely performing string and substring searches on an encrypted dataset of genomic sequences. We do so by leveraging searchable symmetric encryption (SSE) and designing a new method to handle these queries. In comparison to state-of-the-art methods, PRESSGenDB supports various types of queries over genomic sequences, such as string searches and substring searches with or without a requested start position. Moreover, it supports sequences over general alphabets rather than just {0, 1}, and it can search for substrings (patterns) over a whole dataset of genomic sequences rather than a single sequence. Furthermore, by comparing PRESSGenDB's search complexity analytically with the state of the art, we show that it outperforms recent efficient works.

Efficient and Privacy-Preserving Logistic Regression Scheme based on Leveled Fully Homomorphic Encryption

Xin Zhao, Chengjin Liu, Zoe Jiang and Qian Chen (Harbin Institute of Technology, Shenzhen, China); Junbin Fang (Jinan University, China); Daojing He (East China Normal University, China); Jun Zhang (Shenzhen University, China); Xuan Wang (Harbin Institute of Technology Shenzhen Graduate School, China)

In the era of big data, data are often outsourced to the cloud for storage and computation. As data have become a highly valuable resource, data holders need to retain full privacy and control over them. Privacy-preserving machine learning (PPML) aims at extracting the value of data while preserving its privacy, and homomorphic encryption (HE), as a privacy-preserving technique, is increasingly used in PPML schemes. However, since Fully Homomorphic Encryption (FHE) requires bootstrapping after a certain number of homomorphic operations to ensure the correctness of decryption, FHE-based PPML may perform a large number of bootstrappings, which greatly reduces efficiency. Besides, FHE only supports homomorphic addition and multiplication, so most existing solutions use Taylor's theorem to approximate nonlinear functions by linear polynomials, at the cost of model accuracy. To solve these two problems, we propose to simulate the bootstrapping operation in the training phase by a pair of decryption and re-encryption operations, which are delegated to trusted hardware to avoid information leakage after decryption. With this idea, performance is enhanced greatly; in addition, all computations of the (nonlinear) activation function can be executed directly in plaintext form. In this paper, we propose and implement an efficient and privacy-preserving logistic regression scheme based on leveled FHE, and deploy the bootstrapping simulation and activation function on a Raspberry Pi (simulating trusted hardware). The scheme achieves practical usability, demonstrated on standard UCI datasets.
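
A minimal sketch of the simulated bootstrapping; enc/dec below are placeholders standing in for a leveled-HE library's encrypt/decrypt, and the point is that the decrypt-and-re-encrypt pair runs only inside the trusted device:

```python
def refresh(ciphertext, secret_key, public_key, enc, dec):
    """Simulated 'bootstrapping': decrypt and re-encrypt inside trusted
    hardware to reset ciphertext noise, instead of FHE bootstrapping.

    enc/dec are hypothetical callables for a leveled-HE scheme; in the
    paper's setup this runs on a Raspberry Pi acting as the trusted
    device, so the plaintext never leaves it.
    """
    plaintext = dec(secret_key, ciphertext)  # noise removed with the message
    return enc(public_key, plaintext)        # fresh, low-noise ciphertext

# Nonlinear steps (e.g., the sigmoid in logistic regression) can likewise
# be evaluated on the plaintext inside the trusted device, avoiding the
# Taylor approximation and its accuracy loss.
```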

Machine and Deep Learning Approaches for IoT Attack Classification

Alfredo Nascita, Francesco Cerasuolo, Davide Di Monda, Jonas Thern Aberia Garcia, Antonio Montieri and Antonio Pescapé (University of Napoli Federico II, Italy)

In recent years, Internet of Things (IoT) traffic has increased dramatically and is expected to grow further in the near future. Because of their vulnerabilities, IoT devices are often the target of cyber-attacks with dramatic consequences, so there is a strong need for powerful tools that guarantee a good level of security in IoT networks. Machine and deep learning approaches promise good performance on such a complex task. In this work, we employ state-of-the-art traffic classifiers based on deep learning and assess their effectiveness at IoT attack classification, aiming to recognize different attack classes and distinguish them from benign network traffic. In more detail, we utilize effective and unbiased input data that allow fast (i.e., "early") detection of anomalies, and we compare performance with that of traditional (i.e., "post-mortem") machine learning classifiers. The experimental results highlight the need for advanced deep learning architectures fed with input data specifically tailored and designed for IoT attack classification. Furthermore, we perform an occlusion analysis to assess the influence of certain network-layer fields on performance and the possible bias they may introduce.

Phishing Attack Detection with ML-Based Siamese Empowered ORB Logo Recognition and IP Mapper

Manish Bhurtel, Yuba Siwakoti and Danda B. Rawat (Howard University, USA)

Visual cues are the most convincing entities weaponized by cyber-attackers to carry out phishing attacks: a genuine-looking UI is generally sufficient to convince users that a website is legitimate, so they hand their private information to attackers. This serious problem needs to be solved right before users enter their private information. Recent advances in phishing attack detection include the use of Machine Learning (ML) algorithms. In this paper, we propose a hybrid of ML-based and IP-based phishing attack detection mechanisms. We employ the Oriented FAST and Rotated BRIEF (ORB) keypoint extractor to capture the brand name, and a homography-based transformation to localize the logo. Object recognition and localization are followed by a Siamese network that verifies the logo localized by ORB. We then map the IP address of the incoming webpage against the IP pool of the organization associated with the logo. We focus our study on four highly phished organizational logos: Bank of America, Dropbox, Google, and PayPal. With 100 test samples, our system achieves an accuracy of 90% for ORB, and accuracy, precision, recall, and F1-score of 93.68%, 95.56%, 97.73%, and 96.63%, respectively, for the Siamese network.
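
A minimal sketch of the ORB-plus-homography stage in OpenCV; the thresholds are illustrative assumptions, and the Siamese verification and IP mapping are separate stages:

```python
import cv2 as cv
import numpy as np

def locate_logo(template_gray, page_gray, min_matches=15):
    """ORB keypoint matching between a known brand logo and a page
    screenshot; a RANSAC homography then localizes the logo region."""
    orb = cv.ORB_create()
    kp1, des1 = orb.detectAndCompute(template_gray, None)
    kp2, des2 = orb.detectAndCompute(page_gray, None)
    matcher = cv.BFMatcher(cv.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None                          # logo likely not present
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv.findHomography(src, dst, cv.RANSAC, 5.0)
    return H  # maps template corners into the page, giving the logo's location
```

The region the homography delineates can then be cropped and passed to the Siamese network for verification before the IP check.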

Session Chair

Youyang Qu (Deakin University, Australia)
