I am an Assistant Professor in the Department of Electronics and Communication Engineering at the Indian Institute of Technology Roorkee, India. Before that, I was a postdoctoral researcher at the University of California, Davis, USA, from 2021 to 2023, and a research scientist at the German Aerospace Center (DLR), Institute of Communications and Navigation, Neustrelitz, Germany, from 2019 to 2021. I completed my Ph.D. in Electrical Engineering with highest honors (summa cum laude) at the Institute of Computer and Network Engineering, TU Braunschweig, in 2019 under the supervision of Prof. Admela Jukan. I received my master's (M.S.) degree in Electrical Engineering from the Indian Institute of Technology Madras, India, in 2014, and my bachelor's (B.Tech.) degree in Electronics and Communications Engineering from SASTRA University, India, in 2010. My areas of interest include quantum networking, datacenter networking, terahertz communication, and resource allocation in optical networks, with a special focus on stochastic analysis and learning methods, as well as the application of artificial intelligence in communication networks.
Curriculum vitae (pdf)
My research largely involves quantum and datacenter networking, theoretical and algorithmic design and analysis of network resource allocation, and application of artificial intelligence in communication networks.
The classical Internet employs buffering, switching, and routing to send information over thousands of kilometers of undersea fiber-optic cables, while exposing the data to attackers. Quantum networks provide an alternative paradigm in which data can be encoded into photonic degrees of freedom, e.g., polarization, and transmitted as entangled (correlated) photons. This secures quantum bits (qubits) from attackers, since qubits cannot be amplified, duplicated, or measured without altering them. However, future quantum networks will need to replicate the functionality of the classical Internet, which is challenging. We are developing a quantum wrapper (QW) protocol, which wraps a quantum payload, i.e., the actual information, with a header carrying routing information. The header and payload are transmitted separately in time or frequency without affecting the data. This ensures that the QW protocol can incorporate today's networking protocols and coexist with classical networks.
Today's datacenter (DC) and high-performance computing (HPC) systems need reconfigurable data-plane (hardware) and control/management-plane (software) solutions that leverage the unique benefits of optical interconnect technologies and machine-learning-aided network optimization techniques. We work towards designing a novel agile HPC system with a low-diameter topology and application-driven elastic optical bandwidth assignment. The goal is to provide a low-latency communication network that can serve today's and tomorrow's data-intensive applications, which require fast and efficient data movement tailored to the applications' communication profiles.
Communication and navigation systems need better integration of diverse data sources and services into a networked system for the detection and management of security-related scenarios in real time. The aim is to investigate scientifically which methods yield the best possible results in detecting anomalies from Automatic Identification System (AIS) and remote sensing data.
Efficient allocation of optical resources is challenging in elastic optical networks (EONs), as the setup and tear-down of non-uniform bandwidth requests fragment the spectrum in the spectral, time, and spatial dimensions. Therefore, fragmentation must be modeled accurately and managed efficiently and intelligently. We present theoretical models and algorithms for resource allocation in EONs. We also utilize machine learning techniques for handling resources and traffic in optical datacenter networks, and for managing bandwidth and energy consumption in network and edge devices in fiber-wireless networks.
Understanding and representing traffic patterns are key to detecting anomalous trajectories in the transportation domain. However, some trajectories can exhibit heterogeneous maneuvering characteristics despite conforming to normal patterns. Thus, we propose a novel graph-based trajectory representation and association scheme for extracting and consolidating traffic movement patterns, such that data patterns and uncertainty can be learned by deep learning (DL) models. This paper proposes a recurrent neural network (RNN)-based evidential regression model, which predicts a trajectory at future time steps and estimates the associated data and model uncertainties, to detect anomalous maritime trajectories, such as unusual vessel maneuvering, using automatic identification system (AIS) data. Furthermore, we utilize evidential deep learning classifiers to detect unusual turns of vessels and the loss of transmitted signal using predicted class probabilities with associated uncertainties. Our experimental results suggest that the graphical representation of traffic patterns improves the ability of DL models, such as evidential and Monte Carlo dropout models, to learn the temporal-spatial correlation of data and the associated uncertainties. Using different datasets and experiments, we demonstrate that the estimated prediction uncertainty yields fundamental information for the detection of traffic anomalies in the maritime and, possibly, other domains.
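For illustration, here is a minimal sketch (in PyTorch) of how prediction uncertainty can be obtained from a recurrent trajectory predictor via Monte Carlo dropout; the class name TrajPredictor, the feature layout, and the layer sizes are assumptions, not the paper's exact architecture:

import torch
import torch.nn as nn

class TrajPredictor(nn.Module):
    def __init__(self, n_features=4, hidden=64, dropout=0.2):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.drop = nn.Dropout(dropout)
        self.head = nn.Linear(hidden, n_features)  # predicts the next-step lat/lon/speed/course

    def forward(self, x):                 # x: (batch, time, n_features)
        out, _ = self.gru(x)
        return self.head(self.drop(out[:, -1, :]))

def mc_dropout_predict(model, x, n_samples=50):
    """Keep dropout active at inference; the spread of the samples reflects model uncertainty."""
    model.train()                         # keeps dropout layers active
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)   # point prediction and its uncertainty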
Artificial intelligence (AI) is an extensive scientific discipline which enables computer systems to solve problems by emulating complex biological processes such as learning, reasoning and self-correction. This paper presents a comprehensive review of the application of AI techniques for improving performance of optical communication systems and networks. The use of AI-based techniques is first studied in applications related to optical transmission, ranging from the characterization and operation of network components to performance monitoring, mitigation of nonlinearities, and quality of transmission estimation. Then, applications related to optical network control and management are also reviewed, including topics like optical network planning and operation in both transport and access networks. Finally, the paper also presents a summary of opportunities and challenges in optical networking where AI is expected to play a key role in the near future.
Traffic prediction and utilization of past information are essential requirements for intelligent and efficient management of resources, especially in optical data center networks (ODCNs), which serve diverse applications. In this paper, we consider the problem of traffic aggregation in ODCNs by leveraging the predictable or exact knowledge of application-specific information and requirements, such as holding time, bandwidth, traffic history, and latency. As ODCNs serve diverse flows (e.g., long/elephant and short/mice), we utilize machine learning (ML) for prediction of time-varying traffic and connection blocking in ODCNs. Furthermore, with the predicted mean service time, the elapsed time of an active flow (connection) is utilized to estimate its mean residual life (MRL). The MRL information is used for dynamic traffic aggregation while allocating resources to a new connection request. Additionally, the blocking rate is predicted for a future time interval based on the predicted traffic and past blocking information, and is used to trigger a spectrum reallocation process (also called defragmentation) to reduce spectrum fragmentation resulting from dynamic connection setup and tear-down. Simulation results show that ML-based prediction and initial setup times (history) of traffic flows can be used to further improve connection blocking and resource utilization in space-division multiplexed ODCNs.
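As a simplified illustration of the MRL idea (assuming the MRL is estimated empirically from a history of observed holding times; this is not the paper's exact estimator):

import numpy as np

def mean_residual_life(holding_times, elapsed):
    """holding_times: durations of past flows; elapsed: time the active flow has run so far."""
    t = np.asarray(holding_times, dtype=float)
    survivors = t[t > elapsed]        # flows in the history that lasted longer than 'elapsed'
    if survivors.size == 0:
        return 0.0                    # no evidence the flow will remain active much longer
    return float(survivors.mean() - elapsed)

# Example: with a holding-time history [2, 5, 7, 10] and 4 time units elapsed,
# the flow is expected to remain active for (5 + 7 + 10) / 3 - 4 = 3.33 more units.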
In this paper, we investigate an animal-human cohabitation problem with the help of machine learning and fiber-wireless (FiWi) access networks integrating cloud and edge (fog) computing. We propose an early warning system which detects wild animals near roads or railways with the help of wireless sensor networks and alerts passing vehicles of possible animal crossings. Additionally, we show that detecting animals as early as possible, and processing the data at the sensors where feasible, reduces the energy consumption of edge devices and the end-to-end delay in notifying vehicles, as compared to scenarios where raw sensed data needs to be transferred to the base stations or the cloud. At the same time, machine learning helps in classifying captured images at edge devices, in predicting different time-varying traffic profiles, distinguished by latency and bandwidth requirements, at base stations, including animal appearance events at sensors, and in allocating bandwidth in FiWi access networks accordingly. We compare three scenarios of processing data at sensor nodes, at base stations, and a hybrid case of processing sensed data at either sensors or base stations, and show that dynamic allocation of bandwidth in FiWi access networks and processing data at its origin lower the congestion of network traffic at base stations and reduce the average end-to-end delay.
Elastic optical networks are prone to spectrum fragmentation, resulting in poor resource utilization and often higher blocking probability. To overcome spectrum fragmentation, a defragmentation (DF) of the spectrum can be applied by reconfiguring some or all active connections. However, reconfiguration is generally not desirable, as it can interrupt the services of existing connections. In this paper, we propose two novel connection reconfiguration schemes to efficiently address spectrum DF: (i) a reactive–disruptive scheme and (ii) a proactive–non-disruptive scheme. Both schemes utilize the holding times of existing connections in order to reduce fragmentation and thus improve resource utilization and minimize connection blocking. The reactive–disruptive scheme tries to allocate an incoming connection, which could not be accepted otherwise, by reconfiguring some existing connections. The proactive–non-disruptive scheme, on the other hand, reconfigures only those connections that can be shifted without crossing over the spectrum of other connections. This paves the way for a non-disruptive DF when setting up a new connection, which is a desirable feature. Simulation results show that the reconfiguration schemes with holding-time awareness can be effectively utilized to reduce spectrum fragmentation and, hence, result in better resource utilization and an overall lower connection blocking probability.
The spectrum efficiency and connection blocking in elastic optical networks (EONs) depend on how well the spectrum is managed by a spectrum allocation (SA) policy. Although spectrum fragmentation can be minimized by an efficient SA policy, it cannot be avoided. Moreover, heterogeneous bandwidth demands further exacerbate the detrimental effect of spectrum fragmentation on spectrum efficiency and connection blocking probability. Recently, a few heuristic-based defragmentation (DF) schemes have been proposed to reconfigure some or all connections in the network. However, the analytical study of the effect of defragmentation on the connections' blocking probability is still under-explored. In this paper, we analyze the effect of the defragmentation rate using different models, depending on whether DF is performed proactively (Pro-DF) or reactively (Re-DF). Additionally, a combined Proactive-Reactive-Delayed (Pro-Re-DL-DF) model is presented, which admits requests in a delayed fashion (similar to scheduled connections) that would otherwise have been blocked due to fragmentation. We analytically show that, under certain conditions, the defragmentation process under different SA policies can reduce the overall connection blocking probability. We also illustrate that the model can be used in other scenarios, such as increasing the security of an optical link against eavesdropping. To this end, we model the busy/idle patterns of an elastic optical link (EOL) using a multi-class continuous-time Markov chain under two different SA approaches, first-fit and random-fit. Analytical and simulation results show that the positive effect of the defragmentation process depends on the rate at which it is performed, the link load, and the EOL capacity.
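To give a flavor of such Markov-chain-based blocking analysis, here is a deliberately simplified sketch: a single link with C slices serving only single-slice requests, i.e., a birth-death chain equivalent to the Erlang-B formula. The paper's multi-class, fragmentation-aware model is considerably richer; the numbers below are illustrative only.

import math
import numpy as np

def link_blocking(C, lam, mu):
    """Stationary blocking probability of a link with C slices, arrival rate lam, service rate mu."""
    rho = lam / mu
    pi = np.array([rho**k / math.factorial(k) for k in range(C + 1)])
    pi /= pi.sum()                        # normalize the stationary distribution
    return pi[-1]                         # probability that all C slices are busy

print(link_blocking(C=8, lam=4.0, mu=1.0))  # roughly 0.03 for this example load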
Utilizing the dormant path diversity through multipath routing in the Internet to reach end users, thereby fulfilling their QoS requirements, is rather logical. While offering better resource utilization, better reliability, and often even much better quality of experience (QoE), multipath routing and provisioning has also been shown to help network and data center operators achieve traffic engineering in the form of load balancing. In this survey, we first highlight the benefits and basic components of Internet multipath routing. We take a top-down approach and review various multipath protocols, from the application to the link and physical layers, operating at different parts of the Internet. We also describe the mathematical foundations of multipath operation, and highlight the issues and challenges pertaining to reliable data delivery, buffering, and security in deploying multipath provisioning in the Internet. We compare the benefits and drawbacks of these protocols operating at different Internet layers and discuss open issues and challenges.
We demonstrate the correlation between co-propagating classical and quantum bits for quantum wrapper networking. The preliminary experiment shows a visibility of more than 75% for the quantum bits and a bit error rate of less than 5E-7 for the classical bits.
Classical optical devices lack precision when they operate on single photons. We report a quantum digital twin (QDT) to improve quantum key distribution (QKD) implementations. We show that a QDT can increase the key exchange rate under environmental events.
The high bandwidth and low latency requirements of modern computing applications, with their dynamic and non-uniform traffic patterns, impose severe challenges on current data center (DC) and high performance computing (HPC) networks. Therefore, we present a dynamic network reconfiguration mechanism that can satisfy time-varying application demands in an optical DC/HPC network. We propose a direct and an indirect topology extraction method based on machine-learning-aided traffic prediction under a multi-application scenario. The direct approach, traffic prediction for topology extraction and bandwidth reconfiguration (PredicTER), can lead to frequent topology and bandwidth reconfigurations. In contrast, the indirect approach, traffic prediction with clustering for topology extraction and bandwidth reconfiguration (PrediCLUSTER), utilizes an unsupervised learning-based clustering model to first associate the predicted traffic with one of a set of traffic clusters, and then extracts a common topology for that cluster. This restricts the reconfigured topology set to the number of traffic clusters. Our simulation results show that the time-averages of mean packet latencies (and total dropped packets) over 60 seconds of time-varying traffic under PredicTER, PrediCLUSTER, and a static topology are 37.7 μs, 41.2 μs, and 50.2 μs (and 37,967, 12,305, and 36,836), respectively. Overall, the PredicTER (and PrediCLUSTER) method can improve the end-to-end packet latency by 24.9% (and 17.8%), and the packet loss rate by −3.1% (and 66.6%), as compared to the static flat Hyper-X-like topology.
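A minimal sketch of the clustering step behind PrediCLUSTER follows; the array shapes, the choice of k-means, and the apply_topology hook are illustrative assumptions, not the paper's exact pipeline:

import numpy as np
from sklearn.cluster import KMeans

n_nodes, k = 16, 4
history = np.random.rand(200, n_nodes * n_nodes)         # flattened past traffic matrices (placeholder data)
kmeans = KMeans(n_clusters=k, n_init=10).fit(history)    # one pre-computed topology per traffic cluster

predicted_tm = np.random.rand(1, n_nodes * n_nodes)      # output of the traffic predictor for the next interval
cluster_id = kmeans.predict(predicted_tm)[0]
# apply_topology(cluster_id)  # hypothetical hook: reconfigure to that cluster's common topology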
We study the performance of the Hyper-Flex-LION optical interconnect architecture under dynamic traffic with traffic-prediction-aided multi-cluster reconfiguration. Simulation results show a 17.2% latency improvement and a 36.9% packet loss reduction as compared to a fixed topology.
The automatic identification system (AIS) has become an essential tool for maritime security. Nevertheless, how to effectively use the static and dynamic voyage information in AIS data for maritime traffic situation awareness is still a challenge. This paper presents a comparative study of artificial intelligence (AI) techniques on their effectiveness in dealing with various anomalies in the maritime domain using AIS data. The AIS on-off switching (OOS) anomaly is critical in maritime security, since AIS technology is susceptible to manipulation and can be switched on and off to hide illegal activities. Thus, we try to detect and distinguish between intentional and non-intentional AIS OOS anomalies through our AI-assisted anomaly detection framework. We use AIS data, in particular the positional and navigational status of vessels, to study the effectiveness of seven AI techniques, namely artificial neural network (ANN), support vector machine (SVM), logistic regression, k-nearest neighbors, decision tree, random forest, and naive Bayes, in detecting AIS OOS anomalies. Our experimental results show that ANN and SVM are the most suitable techniques for detecting AIS OOS anomalies, with 99.9% accuracy. Interestingly, the ANN model outperforms others when trained with a balanced dataset (i.e., the same order of samples per class), whereas SVM is suitable when the training dataset is unbalanced.
The automatic identification system (AIS) reports vessels' static and dynamic information, which is essential for maritime traffic situation awareness. However, AIS transponders can be switched off to hide suspicious activities, such as illegal fishing or piracy. Therefore, this paper uses real-world AIS data to analyze the possibility of successfully detecting various anomalies in the maritime domain. We propose a multi-class artificial neural network (ANN)-based anomaly detection framework to classify intentional and non-intentional AIS on-off switching anomalies. The multi-class framework captures AIS message dropouts due to various causes, e.g., channel effects or intentional switching-off to carry out illegal activities. We extract position, speed, course, and timing information from real-world AIS data, and use them to train a 2-class (normal and anomaly) and a 3-class (normal, power outage, and anomaly) anomaly detection model. Our results show that the models achieve around 99.9% overall accuracy and are able to classify a test sample in the order of microseconds.
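A minimal, self-contained sketch of such a 3-class classifier is shown below; the feature set, network size, and synthetic placeholder data are assumptions for illustration, not the paper's exact setup:

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X = np.random.rand(1000, 6)              # placeholder features, e.g., lat, lon, speed, course, time gap, dropout length
y = np.random.randint(0, 3, size=1000)   # 0 = normal, 1 = power outage, 2 = anomaly

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))   # near chance level on random data; meaningful only on real AIS features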
Spectrum defragmentation (DF), as a connection reconfiguration method, is essential in elastic optical networks (EONs) in order to minimize connection blocking probability. This paper proposes the first exact Markov model for computing exact blocking probabilities in EONs with DF, taking into account the occupancy status of the spectrum slices of all network links, as well as the waiting and in-service connections during a DF process. Since the complexity of the exact Markov model increases exponentially with the network capacity and size, we propose a reduced-state model in which a link occupancy state is defined by the total number of occupied slices on a fiber link. Furthermore, using a spectrum fragmentation factor in each occupancy state, we calculate state- and class-dependent connection setup rates, which are used to compute approximate blocking in EONs with DF. Notably, we report separately the distinct blocking contributions, one due to resource unavailability and the other due to fragmentation, under both random-fit and first-fit spectrum allocation policies. Our numerical results show that the DF process is very useful in reducing overall connection blocking in EONs. We also observe that blocking due to spectrum fragmentation can be reduced, but not eliminated, in a mesh network topology even when an optimal DF scheme is deployed.
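For context, one commonly used link fragmentation metric, shown here purely as an illustration and not necessarily the fragmentation factor defined in the paper, is the external fragmentation of the spectrum occupancy bitmap:

def fragmentation(slots):
    """slots: list of 0/1 per spectrum slice (1 = occupied); returns 1 - largest free block / total free slices."""
    free = slots.count(0)
    if free == 0:
        return 0.0                       # a fully occupied link is not fragmented
    largest, run = 0, 0
    for s in slots:
        run = run + 1 if s == 0 else 0
        largest = max(largest, run)
    return 1.0 - largest / free

print(fragmentation([1, 0, 0, 1, 0, 1, 0, 0]))  # 5 free slices, largest free block of 2 -> 0.6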
Optical networks are prone to power jamming attacks aimed at service disruption. This paper presents a machine learning (ML) framework for the detection and prevention of jamming attacks in optical networks. We evaluate various ML classifiers for detecting out-of-band jamming attacks of varying intensities. Numerical results show that the artificial neural network is the fastest for inference (10^6 detections per second) and the most accurate (≈ 100%) in detecting power jamming attacks as well as identifying the attacked optical channels. We also discuss and study a novel prevention mechanism for when the system is under active jamming attacks. For this scenario, we propose a novel resource reallocation scheme that utilizes the statistical information of attack detection accuracy to lower the probability of successfully jamming lightpaths while minimizing lightpath reallocations. Simulation results show that the likelihood of jamming a lightpath reduces with increasing detection accuracy, and that localization reduces the number of reallocations required.
We provide an overview of artificial intelligence techniques and their use in optical communication systems and networks with the aim of improving performance. Areas of application include optical transmission, performance monitoring, quality of transmission monitoring, as well as optical network planning and operation.
Scrambling, or information randomization, has been effectively used in various applications to prevent information from falling into the hands of rogue attackers. In this paper, we use the idea of spectrum scrambling to proactively randomize the spectrum across multiple fiber cores to improve connections' security, while at the same time effectively defragmenting the spectrum to improve connections' blocking performance. To this end, we propose a scheme called random spectrum defragmentation (RSD), and model the occupancy pattern of a multi-core fiber link using a multi-class continuous-time Markov chain (CTMC) under two different spectrum allocation methods, first-fit and random-fit. While most efforts have focused on preventing unauthorized access to optical channels, we show that scrambling can be effectively used to help deal with physical-layer attacks. At the same time, numerical results show that blocking can also be improved for a particular randomization process (RP) arrival rate, with a lower connection reconfiguration time.
The known technique of holding-time-information (HTI)-aware routing can be used for connection admission or spectrum defragmentation. We show that using HTI for defragmentation is the most beneficial in reducing blocking in space-division multiplexed elastic optical networks.
Data randomization, or scrambling, has been effectively used in various applications to improve data security. In this paper, we use the idea of data randomization to proactively randomize the spectrum (re)allocation to improve connections' security. As it is well known that random (re)allocation fragments the spectrum and thus increases blocking in elastic optical networks, we analyze the trade-off between system performance and security. To this end, in addition to spectrum randomization, we apply an on-demand defragmentation scheme every time a request is blocked due to spectrum fragmentation. We model the occupancy pattern of an elastic optical link (EOL) using a multi-class continuous-time Markov chain (CTMC) under the random-fit spectrum allocation method. Numerical results show that both blocking and security can be improved for a particular randomization process (RP) arrival rate, while a further increase in the RP arrival rate improves the connections' security at the cost of an increase in overall blocking.
Generally, an elastic optical network (EON) under dynamic setup and termination of connections with non-uniform bandwidths suffers from spectrum fragmentation, which results in a higher blocking probability. To overcome this problem, a defragmentation of the spectrum can be applied by reconfiguring some or all connections in the network. However, reconfiguration can interrupt the services of existing connections if it is not performed in a non-disruptive manner, meaning that no data is lost or interrupted during reconfiguration. In this paper, we propose a non-disruptive, proactive reconfiguration scheme that utilizes the holding times (HT) of the connections in order to reduce fragmentation and thus improve resource utilization and minimize connection blocking. Our reconfiguration method uses the HT information and reconfigures only those connections that can be shifted in parallel without crossing over the spectrum of other connections. This paves the way for a non-disruptive defragmentation when setting up a new connection. Simulation results show that reconfiguration with holding-time awareness can be effectively utilized to reduce spectrum fragmentation and, hence, results in a lower connection blocking probability.
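As a simple illustration of the non-crossing constraint (an assumed data model, not the paper's complete holding-time-aware scheme), the following sketch computes how far an active connection can be retuned toward lower slice indices without crossing over any other connection's spectrum:

def max_left_shift(connections, conn_id):
    """connections: {id: (start_slice, num_slices)}; returns the feasible left shift in slices."""
    start, _ = connections[conn_id]
    # highest slice index occupied by any connection located below this one
    below_end = max((s + n for cid, (s, n) in connections.items()
                     if cid != conn_id and s < start), default=0)
    return start - below_end             # 0 means the connection cannot be shifted hitlessly

conns = {"A": (0, 2), "B": (5, 3)}       # A occupies slices 0-1, B occupies slices 5-7
print(max_left_shift(conns, "B"))        # -> 3, i.e., B can be retuned to slices 2-4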
We analyze Defragmentation-as-a-Service (DaaS) in elastic optical networks and show that the positive effect of defragmentation depends on the rate at which it is performed, the load (i.e., call arrival rate), and the available resources.
Reconfiguration of connections along the frequency axis, similar to frequency hopping in wireless systems, has been shown to improve security in elastic optical networks (EONs). However, the effect of spectrum reallocation on connection blocking and the associated connection security is still under-explored. In this paper, we examine the effect of reconfiguration of connections on the security and blocking performance of an elastic optical link (EOL). We propose a Reconfiguration-as-a-Service (RaaS) method, which randomly reconfigures connections to improve their security, and for that scenario we model the connection patterns of an elastic optical link using a multi-class continuous-time Markov chain under the first-fit and random-fit spectrum allocation (SA) methods. Analytical results show that the random reconfiguration scheme can improve security at the cost of only a slight increase in blocking, depending on the rate at which it is performed, the link load, and the capacity of the EOL. We hence show that it is indeed possible to balance security and blocking performance with RaaS.
We investigate the planning of virtual optical bus (VOB) networks, a packet-oriented all-optical solution for transport networks, under changing traffic conditions. Planning a VOB network consists of grouping all edge-to-edge flows in the network into clusters called VOBs, with the objective of minimizing the packet collision rate in the optical network. Under dynamic traffic, the planning must be repeated each time the traffic demand changes, which requires excessive reconfiguration of already established optical buses. To address this issue, we present a heuristic algorithm called dynamic VOB network layout design (DVNLD) that progressively adapts an initially designed VOB network to traffic changes, while minimizing the required reconfiguration of established VOBs in the network. We also numerically analyze the proposed algorithm and show its effectiveness in adapting to traffic variations and in reducing the required number of reconfigurations.
We analyze the performance of virtual optical bus (VOB) networks, a packet-oriented all-optical solution for transport networks. VOB groups the flows into clusters and coordinates their packets within each cluster in order to minimize packet collisions in the network. We theoretically study the multiplexing (grouping) of packets of flows in a VOB by modeling a VOB node as a queuing system. We derive an expression for the packet loss rate due to inter-VOB collisions, and compare the performance (packet loss rate) of VOB with that of optical burst switching (OBS). We show that the packet loss rate of VOB is upper bounded by the packet/burst loss rate of OBS.
We study the design of virtual optical bus (VOB) networks, which have recently been proposed as a packet-oriented all-optical solution for transport networks. Designing a VOB network consists of grouping all edge-to-edge flows in the network into clusters called VOBs, with the objective of minimizing the packet collision rate in the optical network. We present an efficient heuristic algorithm, which can be utilized in networks with both ring and arbitrary mesh topologies to find near-optimal solutions to the VOB network design problem. Several design examples are presented and the results are compared to those obtained by a linear-programming-based design method. The comparisons show that the algorithm can find comparable solutions, in terms of network performance, in much less time.
My GitHub page: https://github.com/sansastra
Reviewer for:
IEEE Member, 2014 - present
Mentoring Experience
Flautist (Hindustani)