
Session DeepWireless-OS

Opening Session

Conference
8:00 AM — 8:10 AM EDT
Local
May 20 Sat, 5:00 AM — 5:10 AM PDT

Session Chair

Junqing Zhang (University of Liverpool)

Session DeepWireless-KS1

Keynote Session 1

Conference
8:10 AM — 9:10 AM EDT
Local
May 20 Sat, 5:10 AM — 6:10 AM PDT

RF Fingerprinting: Challenges and Experiences in Real-world Applications

Kaushik Chowdhury (Northeastern University, USA)

Network densification is poised to enable the massive throughput jump expected in the era of 5G and beyond. In the first part of the talk, we identify the challenges of verifying the identity of a particular emitter in a large pool of similar devices based on the unique distortions, or 'RF fingerprints', imparted on the signal as it passes through a given transmitter chain. We show how deep convolutional neural networks can uniquely identify a radio in a large signal dataset composed of over a hundred WiFi radios with accuracy close to 99%. For this, we use tools from machine learning, namely data augmentation, attention networks, and deep architectures that have proven successful in image processing, and modify these methods to work in the RF domain. In the second part of the talk, we show how intentional injection of distortions and carefully crafted FIR filters applied at the transmitter side can enhance classification. Finally, we discuss how to detect new devices not previously seen during training using observed statistical patterns. We conclude with a glimpse of other applications of RF fingerprinting, such as 5G waveform detection in large-scale experimental platforms and identifying a specific UAV in a swarm.
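
The transmitter-side distortion injection mentioned in the abstract can be pictured as convolving the baseband I/Q stream with a short complex FIR filter before transmission. The minimal numpy sketch below is only illustrative: the filter taps, signal length, and random data are placeholders, not the coefficients used in the speaker's work.

```python
import numpy as np

# Placeholder complex baseband I/Q burst (e.g., one WiFi frame's worth of samples).
rng = np.random.default_rng(0)
iq = (rng.standard_normal(4096) + 1j * rng.standard_normal(4096)) / np.sqrt(2)

# Illustrative short FIR filter; in practice the taps would be crafted so that the
# added distortion improves class separability without breaking the receiver.
fir_taps = np.array([1.0 + 0.0j, 0.05 - 0.02j, -0.01 + 0.03j])

# Apply the filter at the transmitter: the emitted signal now carries an
# artificial, device-specific "fingerprint" on top of the hardware impairments.
iq_fingerprinted = np.convolve(iq, fir_taps, mode="same")
print(iq_fingerprinted.shape)  # (4096,)
```
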
Speaker Kaushik Chowdhury (Northeastern University, USA)

Kaushik Chowdhury is Professor in the Electrical and Computer Engineering Department at Northeastern University, Boston and the Associate Director of the Institute for the Wireless Internet of Things. He is the winner of the U.S. Presidential Early Career Award for Scientists and Engineers (PECASE) in 2017, the Defense Advanced Research Projects Agency Young Faculty Award in 2017, the Office of Naval Research Director of Research Early Career Award in 2016, and the National Science Foundation (NSF) CAREER award in 2015. He is the recipient of multiple best paper awards at conferences including IEEE GLOBECOM, DySPAN, INFOCOM, ICC, and ICNC. He co-directs the operations of Colosseum RF/network emulator and the NSF Platforms for Advanced Wireless Research project office. His research interests are primarily in the design of wireless systems, with use cases related to applied machine learning for wireless, programmable cellular architectures, digital twins, large-scale experimentation, and networked robotics.


Session Chair

Junqing Zhang (University of Liverpool)

Session DeepWireless-TS1

Technical Session 1 - Deep Learning for Wireless Security

Conference
9:10 AM — 10:30 AM EDT
Local
May 20 Sat, 6:10 AM — 7:30 AM PDT

Keep It Simple: CNN Model Complexity Studies for Interference Classification Tasks

Taiwo Oyedare (Virginia Polytechnic Institute and State University, USA); Vijay K. Shah (George Mason University, USA); Daniel Jakubisin and Jeffrey Reed (Virginia Tech, USA)

The growing number of devices using the wireless spectrum makes it important to find ways to minimize interference and optimize spectrum use. Deep learning models such as convolutional neural networks (CNNs) have been widely utilized to identify, classify, or mitigate interference due to their ability to learn from the data directly. However, there has been limited research on the complexity of such models despite their widespread use for classifying interference: the deep learning-based wireless classification literature has focused mainly on improving classification accuracy, often at the expense of model complexity. This may not be practical for many wireless devices, such as Internet of Things (IoT) devices, which usually have very limited computational resources and cannot handle very complex models. It therefore becomes important to account for model complexity when designing deep learning-based models for interference classification. To address this, we conduct an analysis of CNN-based wireless classification that explores the trade-off among dataset size, CNN model complexity, and classification accuracy under three levels of classification difficulty: interference classification, heterogeneous transmitter classification, and homogeneous transmitter classification. Our study, based on three wireless datasets, shows that a simpler CNN model with fewer parameters can perform just as well as a more complex model, providing important insights into the use of CNNs in computationally constrained applications.
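
As a rough illustration of the "keep it simple" message, the PyTorch sketch below builds a deliberately small 1D CNN over raw I/Q windows and counts its trainable parameters; the input shape, layer sizes, and class count are assumptions for illustration, not the models evaluated in the paper.

```python
import torch.nn as nn

class SmallIQClassifier(nn.Module):
    """Deliberately small CNN for raw I/Q windows (2 x 128 samples assumed)."""
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # global pooling keeps the classifier head tiny
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):                      # x: (batch, 2, 128)
        return self.head(self.features(x).squeeze(-1))

model = SmallIQClassifier()
n_params = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {n_params}")    # a few thousand, versus millions for deep CNNs
```
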
Speaker Taiwo Oyedare (Virginia Tech)

Taiwo Oyedare received the bachelor’s degree in electrical and electronics engineering from Ekiti State University, Ado Ekiti, Nigeria, in 2012, and the master’s degree in computer and information systems engineering from Tennessee State University, in 2016. He is currently pursuing the Ph.D. degree with the Department of Electrical and Computer Engineering, Virginia Tech. His research interests include wireless security and the application of deep learning to wireless communications.


A GAF and CNN based Wi-Fi Network Intrusion Detection System

Rayed Suhail Ahmad (Purdue University Northwest, USA); Asmer Hamid Ali and Syed Mehdi Kazim (Aligarh Muslim University, India); Quamar Niyaz (Purdue University Northwest, USA)

Wi-Fi networks have become ubiquitous in enterprise and home settings, creating opportunities for attackers to target them. These attackers exploit various vulnerabilities in Wi-Fi networks to gain unauthorized access or to extract data from end users' devices. A network intrusion detection system (NIDS) is deployed to detect these attacks before they can cause significant damage to the network's functionality or security. In this work, we propose a deep learning based NIDS using a 2D convolutional neural network (CNN) to detect intrusions inside a Wi-Fi network. Wi-Fi frames are transformed into images using the Gramian Angular Field (GAF) technique, and these images are then fed to the proposed NIDS for intrusion detection. We use a benchmark Wi-Fi intrusion dataset, AWID3, for our model development. Our proposed model achieves an accuracy and F-measure of 99.77% and 99.66%, respectively.
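
The Gramian Angular (Summation) Field step can be sketched directly from its standard definition: rescale a feature vector to [-1, 1], map it to polar angles, and take pairwise cosines of angle sums to obtain an image. The numeric frame features below are placeholders, not actual AWID3 fields.

```python
import numpy as np

def gramian_angular_field(x: np.ndarray) -> np.ndarray:
    """Gramian Angular Summation Field of a 1-D feature vector.

    The vector is min-max rescaled to [-1, 1], mapped to polar angles, and the
    pairwise cosine of angle sums yields an image a 2-D CNN can consume.
    """
    x = np.asarray(x, dtype=float)
    x_scaled = 2 * (x - x.min()) / (x.max() - x.min() + 1e-12) - 1
    phi = np.arccos(np.clip(x_scaled, -1, 1))
    return np.cos(phi[:, None] + phi[None, :])       # shape (len(x), len(x))

# Hypothetical numeric features extracted from one 802.11 frame.
frame_features = np.array([124.0, 1.0, 36.0, 2412.0, -55.0, 0.0, 1.0, 48.0])
gaf_image = gramian_angular_field(frame_features)
print(gaf_image.shape)   # (8, 8) -- resized/stacked before feeding the CNN
```
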
Speaker Rayed Suhail Ahmad

Rayed Suhail Ahmad is a master's student at Purdue University Northwest. He completed his BS from Aligarh Muslim University, India. Before joining PNW as a graduate student, he worked as a software developer for two years. His research interests include machine learning and cybersecurity.


Federated Radio Frequency Fingerprinting with Model Transfer and Adaptation

Chuanting Zhang (Shandong University, China); Shuping Dang (University of Bristol, United Kingdom (Great Britain)); Junqing Zhang (University of Liverpool, United Kingdom (Great Britain)); Haixia Zhang (Shandong University, China); Mark Beach (University of Bristol, United Kingdom (Great Britain))

Radio frequency (RF) fingerprinting makes highly secure device authentication possible for future networks by exploiting hardware imperfections introduced during manufacturing. Although this technique has received considerable attention over the past few years, RF fingerprinting still faces the great challenge of channel-variation-induced data distribution drifts between the training phase and the test phase. To address this fundamental challenge and support model training and testing at the edge, we propose a federated RF fingerprinting algorithm with a novel strategy called model transfer and adaptation (MTA). The proposed algorithm introduces dense connectivity among convolutional layers into RF fingerprinting to enhance learning accuracy and reduce model complexity. We implement the proposed algorithm in the context of federated learning, making it communication-efficient and privacy-preserving. To further overcome the data mismatch challenge, we transfer the learned model from one channel condition and adapt it to other channel conditions with only a limited amount of information, leading to highly accurate predictions under environmental drifts. Experimental results on real-world datasets demonstrate that the proposed algorithm is model-agnostic and signal-irrelevant. Compared with state-of-the-art RF fingerprinting algorithms, it improves prediction performance considerably, with a gain of up to 15%.
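
The federated side of such a scheme typically reduces to FedAvg-style aggregation of locally trained fingerprinting models. The PyTorch sketch below shows only that aggregation step under a toy model; it does not reproduce the paper's MTA strategy or its dense-connectivity architecture.

```python
import copy
import torch.nn as nn

def federated_average(client_state_dicts, client_weights):
    """FedAvg-style aggregation: weighted average of client model parameters."""
    total = float(sum(client_weights))
    global_state = copy.deepcopy(client_state_dicts[0])
    for key in global_state:
        global_state[key] = sum(
            (w / total) * sd[key].float()
            for sd, w in zip(client_state_dicts, client_weights)
        )
    return global_state

# Toy demo: three "clients" sharing one fingerprinting-classifier architecture,
# each trained locally on its own RF captures (local training omitted here).
clients = [nn.Linear(128, 10) for _ in range(3)]
sample_counts = [500, 300, 200]                     # aggregation weights, e.g. dataset sizes
global_model = nn.Linear(128, 10)
global_model.load_state_dict(
    federated_average([c.state_dict() for c in clients], sample_counts))
```
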
Speaker Chuanting Zhang (University of Bristol)

Chuanting Zhang is a tenure-track associate professor at the School of Software, Shandong University. Before this position, he was a Senior Research Associate at the University of Bristol, working with Prof. Mark A. Beach, and a Postdoctoral Research Fellow at King Abdullah University of Science and Technology (KAUST), working with Prof. Mohamed-Slim Alouini and Prof. Basem Shihada. He received his Ph.D. degree in communication and information systems from Shandong University, Jinan, China, in 2019, under the supervision of Prof. Minggao Zhang and Prof. Haixia Zhang.


A Noise-Robust Radio Frequency Fingerprint Identification Scheme for Internet of Things Devices

Yuexiu Xing (Nanjing University of Posts and Telecommunications, China); Xiaoxing Chen (Jiang Su Yi Tong High-tech Company Limited, China); Junqing Zhang (University of Liverpool, United Kingdom (Great Britain)); Aiqun Hu (Southeast University, China); Dengyin Zhang (Nanjing University of Posts and Telecommunications, China)

Radio frequency fingerprint (RFF) identification is a potentially effective technique for addressing the authentication security of Internet of Things (IoT) devices. Owing to the complex working environment and limited resources of IoT devices, noise is non-negligible in RFF identification, and suppressing the noise without damaging the RFF information is a challenge. In this paper, we propose a robust RFF identification scheme, which consists of a frequency point selection (FPS) based denoising algorithm and a convolutional neural network (CNN) classifier. The FPS algorithm performs denoising by filtering out all frequency components that are independent of the RFF. The CNN is designed with a dynamically decreasing learning rate to accelerate learning and obtain optimal identification performance. Experiments were conducted with 54 ZigBee devices to evaluate the performance of the proposed scheme under three different RFF identification scenarios. The results show that the FPS algorithm brings the highest accuracy improvement, of about 25%, when the training signal-to-noise ratio (SNR) is hybrid and the test SNR is 0 dB.
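
Conceptually, FPS-style denoising keeps only the spectral bins assumed to carry fingerprint information and discards everything else. The numpy sketch below illustrates that masking idea; the set of retained bins is hypothetical, since choosing them is precisely the paper's contribution.

```python
import numpy as np

def fps_denoise(iq: np.ndarray, keep_bins: np.ndarray) -> np.ndarray:
    """Frequency-point-selection style denoising (illustrative only).

    Transform the captured I/Q segment to the frequency domain, retain only the
    bins assumed to carry RFF information, and zero everything else.
    """
    spectrum = np.fft.fft(iq)
    mask = np.zeros_like(spectrum)
    mask[keep_bins] = 1.0
    return np.fft.ifft(spectrum * mask)

rng = np.random.default_rng(1)
iq = rng.standard_normal(256) + 1j * rng.standard_normal(256)   # noisy capture (placeholder)
keep_bins = np.arange(10, 40)                                    # hypothetical RFF-bearing bins
cleaned = fps_denoise(iq, keep_bins)
print(cleaned.shape)
```
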
Speaker Yuexiu Xing (Nanjing University of Posts and Telecommunications)

Yuexiu Xing received the M.Eng. degree in Electronics and Communication Engineering and the Ph.D. degree in Information and Communication Engineering from Southeast University, Nanjing, China, in 2016 and 2021, respectively. She is currently a Lecturer at Nanjing University of Posts and Telecommunications. Her current research interests include the Internet of Things, physical layer security, wireless security, and radio frequency fingerprint identification.


Session Chair

Junqing Zhang (University of Liverpool)

Session DeepWireless-TS2

Technical Session 2 - Deep Learning for Wireless Applications

Conference
11:00 AM — 11:40 AM EDT
Local
May 20 Sat, 8:00 AM — 8:40 AM PDT

WiWm-EP: Wi-Fi CSI-based Wheat Moisture Detection Using Equivalent Permittivity

Pengming Hu and Weidong Yang (Henan University of Technology, China); Xuyu Wang (Florida International University, USA); Shiwen Mao (Auburn University, USA); Erbo Shen (Henan University of Technology & Kaifeng University, China)

Moisture content is one of the important indicators of food storage security. Existing detection methods are time-consuming and costly, making on-line moisture detection difficult to realize. In this paper, based on the dielectric properties of wheat, we propose a wheat moisture content detection system using commercial Wi-Fi devices, termed WiWm-EP. First, we introduce the relationship between the moisture content of wheat and its dielectric constant. Then, we establish an equivalent permittivity (EP) model to characterize wheat moisture content, where the EP can be calculated from the channel state information (CSI) of a dual-antenna model. Next, we build a fitting function between the EP and the moisture content to serve as the wheat moisture detection model. Finally, we evaluate the performance of the system through different experiments. The average relative error of the detection results for five wheat samples with different moisture contents is less than 3%.
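
The final step, mapping the CSI-derived equivalent permittivity to moisture content through a fitted function, can be sketched as a simple curve fit. All numbers below are made-up placeholders for illustration, not the paper's calibration measurements.

```python
import numpy as np

# Placeholder calibration pairs: equivalent permittivity derived from CSI vs.
# reference moisture content (%) obtained with a laboratory method.
ep_samples = np.array([2.4, 2.8, 3.3, 3.9, 4.6])
moisture_pct = np.array([10.5, 12.0, 13.5, 15.0, 16.5])

# Fit a low-order polynomial as the "EP -> moisture" detection model.
coeffs = np.polyfit(ep_samples, moisture_pct, deg=2)
moisture_model = np.poly1d(coeffs)

print(moisture_model(3.6))   # predicted moisture content for a new EP estimate
```
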
Speaker Xuyu Wang (Florida International University)

Dr. Xuyu Wang is an Assistant Professor in the Knight Foundation School of Computing and Information Sciences at Florida International University. Before joining FIU, he was an Assistant Professor in the Department of Computer Science at California State University, Sacramento. He received his Ph.D. from the Department of Electrical and Computer Engineering at Auburn University in 2018. His research interests include wireless sensing, Internet of Things, wireless localization, smart health, wireless networks, trustworthy AI, and quantum machine learning. He received the NSF CRII Award in 2021. He was a co-recipient of the 2022 Best Journal Paper Award of IEEE ComSoc eHealth Technical Committee, the IEEE INFOCOM 2022 Best Demo Award, the IEEE ICC 2022 Best Paper Award, the IEEE Vehicular Technology Society 2020 Jack Neubauer Memorial Award, the IEEE GLOBECOM 2019 Best Paper Award, the IEEE ComSoc MMTC Best Journal Paper Award in 2018, the IEEE PIMRC 2017 Best Student Paper Award, the IEEE SECON 2017 Best Demo Award, and the Second Prize of the Natural Scientific Award of the Ministry of Education, China, in 2013. 


ADAPTER: A DRL-based Approach to Tune Routing in WSNs

Chao Sun, Jianxin Liao, Jiangong Zheng, Xiaotong Guo, Tongyu Song, Jing Ren and Ping Zhang (University of Electronic Science and Technology of China, China); Yongjie Nie (Yunnan Power Grid Co., Ltd, China); Siyang Liu (Yunnan Power Grid Co. Ltd, China)

As an essential part of the Internet of Things, Wireless Sensor Networks (WSNs) require high energy efficiency and Quality of Service (QoS). Most routing algorithms used in WSNs suffer from two significant limitations: the inability to accommodate multiple optimization objectives and the inability to quickly adjust to dynamically changing environments. To address these limitations, we propose an adaptive routing-algorithm selection approach based on Deep Reinforcement Learning, called ADAPTER. This approach allows the system to intelligently select among various routing algorithms in response to changes in the system state, yielding improved joint optimization of energy efficiency and QoS metrics. Our experiments demonstrate that ADAPTER outperforms using any single routing algorithm in terms of network lifetime, average end-to-end delay, and packet loss rate.
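
A minimal sketch of the selection idea, assuming a small Q-network, an epsilon-greedy policy, and an illustrative candidate set of WSN routing algorithms (none of these specifics come from the paper):

```python
import torch
import torch.nn as nn

ROUTING_ALGOS = ["LEACH", "PEGASIS", "greedy-geographic"]   # illustrative candidate set

class RoutingSelector(nn.Module):
    """Tiny Q-network: WSN state features -> Q-value per candidate routing algorithm."""
    def __init__(self, state_dim: int = 6, n_actions: int = len(ROUTING_ALGOS)):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))

    def forward(self, state):
        return self.net(state)

def select_algorithm(q_net, state, epsilon=0.1):
    """Epsilon-greedy choice of the routing algorithm for the next round."""
    if torch.rand(1).item() < epsilon:
        return torch.randint(len(ROUTING_ALGOS), (1,)).item()
    with torch.no_grad():
        return int(q_net(state).argmax())

q_net = RoutingSelector()
state = torch.tensor([0.7, 0.4, 0.9, 0.2, 0.5, 0.8])   # e.g., residual energy, load, PLR ...
print(ROUTING_ALGOS[select_algorithm(q_net, state)])
```
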
Speaker Chao Sun (University of Electronic Science and Technology of China)

Chao Sun is a graduate student at the University of Electronic Science and Technology of China. His research focuses on the application of deep reinforcement learning to networking.


Session Chair

Xuyu Wang (Florida International University)

Session DeepWireless-KS2

Keynote Session 2

Conference
2:00 PM — 3:00 PM EDT
Local
May 20 Sat, 11:00 AM — 12:00 PM PDT

Understand Me If You Can: Reasoning Foundations of Semantic Communication Networks

Walid Saad (Virginia Tech, USA)

For decades, the wireless link between transmitter and receiver has been seen as a mere bit pipe whose goal is to faithfully reconstruct the exact transmitted signal at the receiver, without paying attention to the meaning or effect of the source message. This classical design may excel in delivering high communication rates and low bit-level errors, but its limitations become apparent when faced with the challenge of transmitting massive data streams for connected intelligence, Internet of Senses, or holographic applications, where the message intent and effectiveness must be considered, and extremely stringent requirements for reliability and latency must be met, often simultaneously. In this regard, the concept of semantic communication, in which the meaning of the source messages is incorporated into the design of a communication link, has recently emerged as a promising solution. However, despite a recent surge of efforts in this area, the research landscape is remarkably still limited to basic constructs in which even the very definition of "semantics" remains ambiguous. In this talk, we opine that major breakthroughs in semantic communications can only be made by equipping the communication nodes with the capability to exploit information semantics at a fundamental level (from the data structure and relationships), which enables them to build a knowledge base, reason on their data, and engage in a form of communication using a machine language, similar to human conversation, with the capability to deduce meaning from the data in a manner akin to human reasoning. Towards this goal, we introduce a holistic vision for semantic communications that is firmly grounded in rigorous artificial intelligence (AI) and causal reasoning foundations, with the potential to revolutionize the way information is modeled, transmitted, and processed in communication systems. We show how, by embracing semantic communication through our proposed vision, we can usher in a new era of knowledge-driven, reasoning wireless networks that are more sustainable and resilient than today's data-driven, knowledge-agnostic networks. We also shed light on how this framework can create AI-native networks, a key requirement of future wireless systems. As a first step towards enabling this paradigm shift, we present our recent key results in this area, with foundations in AI and game theory, that showcase how the use of semantic communications can reduce the volume of data circulating in a network while improving reliability, two critical requirements for emerging wireless services such as connected intelligence and the metaverse. We conclude with a discussion of future opportunities in this exciting area.
Speaker Walid Saad (Virginia Tech, USA)

Walid Saad (S'07, M'10, SM'15, F'19) received his Ph.D. degree from the University of Oslo, Norway, in 2010. He is currently a Professor in Virginia Tech's Electrical and Computer Engineering Department, where he leads the Network sciEnce, Wireless, and Security (NEWS) group. He is also the Next-G Wireless Faculty Lead at Virginia Tech's Innovation Campus. His research interests include wireless networks (5G/6G/beyond), machine learning, game theory, security, UAVs, semantic communications, cyber-physical systems, and network science. Dr. Saad is a Fellow of the IEEE. He is the recipient of the NSF CAREER award in 2013 and the Young Investigator Award from the Office of Naval Research (ONR) in 2015. He is the (co-)recipient of eleven conference best paper awards, at IEEE WiOpt in 2009, ICIMP in 2010, IEEE WCNC in 2012, IEEE PIMRC in 2015, IEEE SmartGridComm in 2015, EuCNC in 2017, IEEE GLOBECOM (2018 and 2020), IFIP NTMS in 2019, and IEEE ICC (2020 and 2022). His research was recognized with the prestigious IEEE Fred W. Ellersick Prize from the IEEE Communications Society (ComSoc) in 2015 and 2022. He is also a co-author of the papers that received the 2019 and 2021 IEEE Communications Society Young Author Best Paper Award. Other recognitions include the 2017 IEEE ComSoc Best Young Professional in Academia Award, the 2018 IEEE ComSoc Radio Communications Committee Early Achievement Award, and the 2019 IEEE ComSoc Communication Theory Technical Committee Early Achievement Award. He has been listed annually in the Clarivate Web of Science Highly Cited Researcher List since 2019. Dr. Saad is currently the Editor-in-Chief of the IEEE Transactions on Machine Learning in Communications and Networking.


Session Chair

Carlo Fischione (KTH)

Session DeepWireless-TS3

Technical Session 3 - Deep Learning for 5G Communications

Conference
3:00 PM — 5:00 PM EDT
Local
May 20 Sat, 12:00 PM — 2:00 PM PDT

RIS-Empowered MEC for URLLC Systems with Digital-Twin-Driven Architecture

Sravani Kurma (National Sun Yat-sen University, Taiwan); Keshav Singh (National Sun Yat-sen University, Kaohsiung, Taiwan); Mayur Vitthalrao Katwe (National Sun Yat-sen University, Taiwan); Shahid Mumtaz (Instituto de Telecomunicações, Portugal); Chih-Peng Li (National Sun Yat-sen University, Taiwan)

This paper investigates a digital twin (DT) enabled reconfigurable intelligent surface (RIS)-aided mobile edge computing (MEC) system under given ultra-reliable low-latency communication (URLLC) constraints. In particular, we focus on minimizing the total end-to-end (e2e) latency of the considered system through the joint optimization of the beamforming design at the RIS, power and bandwidth allocation, processing rates, and task offloading parameters using the DT architecture. To tackle the formulated non-convex optimization problem, we first model it as a Markov decision process (MDP) and then adopt a deep reinforcement learning (DRL) algorithm to solve it effectively. Simulation results confirm that the proposed DT-enabled resource allocation scheme for the RIS-empowered MEC network achieves up to 60% lower transmission delay and 20% lower energy consumption compared to the scheme without RIS.
Speaker Sravani Kurma (National Sun Yat-sen University, Taiwan)

Sravani Kurma (Graduate Student Member, IEEE) received the B.Tech. degree in Electronics and Communication Engineering from the JNTUH College of Engineering, Jagtial, India, in 2017, and the Master's degree (gold medalist) in Communication System Engineering from Visvesvaraya National Institute of Technology, Nagpur, India, in 2019. She is currently pursuing the Ph.D. degree at the Institute of Communications Engineering (ICE), National Sun Yat-sen University, Taiwan. Her current research interests include 5G, 6G, the Industrial Internet of Things (IIoT), wireless energy harvesting (EH), cooperative communications, reconfigurable intelligent surfaces (RIS), full-duplex communication, cell-free MIMO, and ultra-reliable low-latency communication (URLLC).


Shallow Neural Networks for Channel Estimation in Multi-Antenna Systems

Dheeraj Raja Kumar (Centre Tecnològic de Telecomunicacions de Catalunya & Universitat Politècnica de Catalunya, Spain); Carles Antón-Haro (Centre Tecnològic de Telecomunicacions de Catalunya (CTTC), Spain); Xavier Mestre (Centre Tecnològic de Telecomunicacions de Catalunya (CTTC), Spain)

In this paper, we investigate neural network-based channel estimation strategies for point-to-point multi-input multi-output (MIMO) systems. In an attempt to keep computational complexity low, we restrict ourselves to shallow architectures featuring a single hidden layer. Specifically, we consider (i) fully-connected feedforward neural networks and (ii) 1D/2D convolutional neural networks. The analysis includes an assessment of the estimation error performance, along with the computational complexity associated with the training and inference phases. Several benchmarks are considered, such as the conventional least squares and (linear) MMSE estimators, as well as other deep neural network architectures from the literature.
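
A single-hidden-layer feedforward estimator of the kind studied here can be sketched in a few lines of PyTorch; the antenna configuration, hidden width, and input convention (refining a least-squares estimate) are assumptions for illustration rather than the exact setup of the paper.

```python
import torch
import torch.nn as nn

NT, NR = 4, 4                      # assumed antenna configuration
IN_DIM = 2 * NT * NR               # real + imaginary parts of the LS channel estimate
OUT_DIM = 2 * NT * NR              # real + imaginary parts of the refined estimate

# Single hidden layer, as in the shallow architectures considered in the paper;
# the hidden width of 128 is an illustrative choice.
estimator = nn.Sequential(
    nn.Linear(IN_DIM, 128),
    nn.ReLU(),
    nn.Linear(128, OUT_DIM),
)

ls_estimate = torch.randn(10, IN_DIM)        # batch of noisy LS estimates (placeholder data)
refined = estimator(ls_estimate)             # trained with an MSE loss against the true channel
print(refined.shape)                         # torch.Size([10, 32])
```
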
Speaker Dheeraj Raja Kumar (Centre Tecnològic de Telecomunicacions de Catalunya - CTTC)

Dheeraj is pursuing doctoral studies at CTTC, Barcelona. He holds a master's degree in Applied Telecommunication and Engineering Management from UPC Barcelona and a bachelor's degree from City University of Hong Kong. His research focuses on AI/ML for the PHY layer.


Multi-Agent Deep Reinforcement Learning for the Access Point Activation Strategy in Cell-Free Massive MIMO Networks

Li Sun (Auburn University, USA); Jing Hou (California State University San Marcos, USA); Richard Chapman (Auburn University, USA)

The cell-free massive MIMO network is a promising solution to support 5G communication. However, as an ultra-dense network (UDN), it suffers from two major issues, power consumption and scalability, which hinder its wide use in practice. The increasing and time-varying user demand, the dynamic propagation environment, and the huge number of access points (APs) make addressing these issues a challenging task. To this end, we propose a distributed solution to the access point activation (APA) problem in cell-free massive MIMO networks that reduces power consumption while accounting for dynamic user demand. We leverage the user-centric characteristic and design a multi-agent deep reinforcement learning (MADRL) algorithm by which each AP independently decides whether it needs to be switched on or off. The simulation results show that MADRL outperforms the centralized and random strategies. This research demonstrates the ability of distributed learning to solve the APA problem in cell-free massive MIMO networks.
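
A hedged sketch of the per-AP decision making, assuming each AP runs an independent Q-network over a handful of local observations with a binary on/off action (the observation features and network sizes are illustrative, not taken from the paper):

```python
import torch
import torch.nn as nn

class APAgent(nn.Module):
    """Per-AP Q-network: local observation -> Q-values for {switch off, stay on}."""
    def __init__(self, obs_dim: int = 4):
        super().__init__()
        self.q = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(), nn.Linear(32, 2))

    def decide(self, obs):
        with torch.no_grad():
            return int(self.q(obs).argmax())   # 0 = switch off, 1 = stay on

# Each AP runs its own agent on local observations (assumed features: served-user
# demand, channel quality, neighbour load, current on/off state).
agents = [APAgent() for _ in range(8)]
obs = torch.rand(8, 4)
decisions = [agent.decide(o) for agent, o in zip(agents, obs)]
print(decisions)
```
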
Speaker Jing Hou (California State University San Marcos)

Jing Hou is an assistant professor in the Department of Computer Science and Information Systems at California State University San Marcos. Her research interests include cybersecurity, network economics, game theory, wireless communications, and machine learning.


Untrained Neural Network based Bayesian Detector for OTFS Modulation Systems

Hao Chang, Alva Kosasih and Wibowo Hardjawana (The University of Sydney, Australia); Xinwei Qu and Branka Vucetic (University of Sydney, Australia)

The design of orthogonal time frequency space (OTFS) symbol detectors for high-mobility communication scenarios has received considerable attention lately. Current state-of-the-art detectors fall into two categories: iterative detectors and trained DNN detectors. Many practical iterative detectors rely heavily on the accuracy of the initial symbol estimates, whereby a minimum mean square error (MMSE) denoiser is used prior to iterative symbol detection. Trained DNN-based detectors, however, depend on the availability of large computational resources and on the fidelity of synthetic datasets for the training phase, both of which are costly. In this paper, we propose a decoder-only deep image prior (DIP) architecture, referred to as D-DIP, to replace the MMSE denoiser in the iterative detector. DIP is a type of DNN that does not require training. We choose Bayesian parallel interference cancellation (BPIC) as the iterative detector in order to achieve the lowest computational complexity. Our simulation results show that the symbol error rate (SER) performance of the proposed D-DIP-BPIC detector outperforms practical state-of-the-art detectors by 0.5 dB while offering the lowest computational complexity.
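
The deep-image-prior idea at the core of D-DIP can be sketched as fitting a small untrained decoder to the noisy observation for a limited number of iterations and using its output as the initial symbol estimate. The PyTorch sketch below is a simplified stand-in: the decoder architecture, latent size, and stopping rule are assumptions, and the BPIC stage is not shown.

```python
import torch
import torch.nn as nn

def dip_denoise(y: torch.Tensor, n_iters: int = 200, lr: float = 0.01) -> torch.Tensor:
    """Deep-image-prior style denoising of a received symbol vector (illustrative).

    A small untrained decoder is fitted to the noisy observation y; stopping after
    a limited number of iterations lets the network capture signal structure before
    it starts fitting the noise. The output would then seed an iterative detector.
    """
    z = torch.randn(1, 16)                    # fixed random latent code
    decoder = nn.Sequential(
        nn.Linear(16, 64), nn.ReLU(),
        nn.Linear(64, y.numel()),
    )
    opt = torch.optim.Adam(decoder.parameters(), lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        loss = ((decoder(z).view_as(y) - y) ** 2).mean()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return decoder(z).view_as(y)

y_noisy = torch.randn(64)                     # placeholder received/equalized observations
x0 = dip_denoise(y_noisy)                     # initial symbol estimate for the detector
print(x0.shape)
```
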
Speaker Hao Chang (The University of Sydney)



Recurrent Neural Network Based RACH Scheme Minimizing Collisions in 5G and Beyond Networks

Siba Narayan Swain and Ashit Subudhi (Indian Institute of Technology Dharwad, India)

The limited number of preambles in 5G New Radio (NR) can be a bottleneck for the performance of the network access procedure. Because of this limit, there is a non-zero probability that two User Equipments (UEs) select the same preamble signature, leading to a collision, in which case the base station (gNB) in the 5G Radio Access Network (RAN) is unable to send a response to the UEs. Furthermore, as the number of cellular UEs and Machine Type Communication (MTC) devices grows, the probability of such preamble collisions increases further, leading to reattempts by UEs and, in turn, increased latency and reduced channel utilization. To reduce contention during preamble access, we propose using deep learning models to design a RACH procedure that predicts incoming connection requests in advance and proactively allocates uplink resources to UEs. We use a Recurrent Neural Network (RNN), fed with the history of connection requests, to predict which UEs will participate in the contention-based RACH procedure. Finally, we propose an RNN-based RACH scheme in which the gNB uses the RNN model alongside the standard RACH process to reduce preamble collisions. Extensive simulations show a significant reduction in the number of collisions when the proposed scheme is employed in a dense user scenario, demonstrating its efficacy in enabling massive access of users to the 5G network.
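
A minimal sketch of the prediction step, assuming a single-layer LSTM over binary per-occasion request histories and a sigmoid output per UE (the population size, history length, and layer widths are illustrative, not taken from the paper):

```python
import torch
import torch.nn as nn

NUM_UES, HISTORY = 32, 20          # assumed UE population and history window

class RachPredictor(nn.Module):
    """LSTM over past request patterns -> probability each UE requests the next occasion."""
    def __init__(self, num_ues: int = NUM_UES, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=num_ues, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, num_ues)

    def forward(self, history):               # history: (batch, HISTORY, NUM_UES) of 0/1
        _, (h_n, _) = self.lstm(history)
        return torch.sigmoid(self.out(h_n[-1]))

model = RachPredictor()
past_requests = torch.randint(0, 2, (1, HISTORY, NUM_UES)).float()
p_next = model(past_requests)                  # gNB could pre-allocate uplink grants to
print(p_next.shape)                            # UEs with high predicted probability
```
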
Speaker Siba Narayan Swain (IIT Dharwad, India)

Dr. Siba Narayan Swain completed his MS and PhD in Computer Science and Engineering at IIT Madras, India. His research interests include 5G mobile networks, data-driven networking, and privacy preservation in next-generation networks. He is actively exploring the use of modern technologies such as AI and blockchains to enhance the performance and user experience of 5G/6G services. He is presently an Assistant Professor at IIT Dharwad, where he teaches courses on computer and communication networks and conducts R&D with students on cutting-edge technologies.


Session Chair

Xuyu Wang (Florida International University)

Session DeepWireless-CS

Closing Session

Conference
5:00 PM — 5:10 PM EDT
Local
May 20 Sat, 2:00 PM — 2:10 PM PDT

Session Chair

Xuyu Wang (Florida International University)

