Ph.D. Theses
Browsing Ph.D. Theses by Subject "Computer Science and Engineering"
Showing items 1-5 of 5
Item Air Quality Prediction Using Deep Learning Techniques (Avinashilingam, 2024-06) Shree Nandhini P; Dr. P. Amudha

Machine Learning and Deep Learning models have been widely used to predict air quality. Monitoring air quality involves both regulatory measures and public awareness campaigns to reduce emissions from sources such as vehicles, industrial activities, agriculture, and household combustion. Air pollution prediction is useful for communicating pollution levels, allowing policy makers to adopt measures that reduce its impact. Over the past few decades, human activity, industrialization, and urbanization have made air quality a life-threatening factor in many countries; poor air quality causes illnesses such as respiratory tract and cardiovascular diseases. Hence, it is necessary to predict PM2.5 concentrations accurately in order to protect citizens from the dangerous impact of air pollution. Air pollution refers to the presence of harmful or excessive quantities of substances in the air we breathe, which can be detrimental to human health, the environment, and ecosystems. These substances, known as pollutants, come from various sources, including industrial activities, vehicle emissions, agricultural practices, and natural phenomena; common air pollutants include particulate matter (PM), nitrogen oxides (NOx), sulfur dioxide (SO2), carbon monoxide (CO), volatile organic compounds (VOCs), and ozone (O3). Air quality prediction estimates the future state of air quality at a particular location from existing data, such as historical air quality records.

In the first phase of the research work, an Improved Sparse Auto Encoder-Deep Learning (ISAE-DL) algorithm is used to predict air quality, with a feed-forward neural network serving as the sparse autoencoder. Combined k-Nearest Neighbour with Euclidean Distance (kNN-ED) and with Dynamic Time Warping Distance (kNN-DTWD) is used to acquire the particulate matter and meteorological data. In addition, an Artificial Neural Network (ANN) and a Long Short-Term Memory (LSTM) network are used to acquire the relevant information, and the classification model is generated from the training data.
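As an illustration of the kNN-DTWD component, the following minimal Python sketch retrieves the k historical pollution windows most similar to a current window under Dynamic Time Warping. The window length, toy data, and function names are illustrative assumptions, not the thesis's implementation.

```python
# Minimal sketch of a k-nearest-neighbour lookup under Dynamic Time
# Warping (DTW), in the spirit of the kNN-DTWD component described above.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a) * len(b)) DTW distance between two 1-D series."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

def knn_dtw(query: np.ndarray, history: list, k: int = 3) -> list:
    """Indices of the k historical windows most similar to the query."""
    dists = [dtw_distance(query, h) for h in history]
    return list(np.argsort(dists)[:k])

# Toy usage: find past 24-hour PM2.5 windows resembling the current one.
rng = np.random.default_rng(0)
past_windows = [rng.normal(50, 10, 24) for _ in range(100)]
current_window = rng.normal(55, 10, 24)
print(knn_dtw(current_window, past_windows, k=3))
```

The matched neighbours would then supply the particulate and meteorological records fed into the ANN and LSTM stages.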
In the second phase of the research work, Voronoi Clustering Sparse Auto Encoder-Deep Learning (VCSAE-DL) is developed to handle locations with long time delays for better air quality prediction. Temporal and spatial features are identified to retrieve the most important features for prediction. Clusters are formed around different centres, and the clustering process stops once all the data are covered. The clustered data and the terrain information are given as input to neural network layers, namely an ANN, a Convolutional Neural Network (CNN), and an LSTM; their outputs are combined and passed to a sparse autoencoder for air quality prediction. This method efficiently reduces the long-delay issues, but it still struggles to learn the long-term dependencies of air pollutant concentrations.

In the third phase of the research work, a Transferred Stacked Bidirectional and Unidirectional Long Short-Term Memory (T-SBU-LSTM) is proposed to overcome this long-term dependency limitation of the LSTM for air quality prediction. The T-SBU-LSTM learns long-term PM2.5 dependencies and uses transfer learning to carry knowledge from smaller temporal resolutions to higher ones. Transfer learning improves prediction accuracy at higher temporal resolutions by identifying similarities between two separate datasets, tasks, or models and transferring knowledge from the source to the new domain. The combined architecture enhances feature learning from large-scale spatial-temporal time series data by learning both forward and backward dependencies, and this phase expands air quality prediction from a specific location to several adjacent locations with time delays ranging from short to long periods.

In the fourth phase of work, Wasserstein Distance-based Deep Transfer Learning (WD-DTL) is proposed to reduce the learning time of transfer learning. WD-DTL is constructed to learn invariant features between the source and target domains; initially, a base LSTM model is trained with sufficient data in the source domain. Finally, the developed approaches, ISAE-DL, VCSAE-DL, T-SBU-LSTM, and WD-DTL, were compared using the performance metrics Accuracy, Precision, Specificity, Sensitivity, AUC, MCC, and MAER. The experimental results showed that the WD-DTL-based air quality prediction system performs better than the other prediction algorithms.

Item Energy Efficient Congestion Control Techniques in Wireless Sensor Networks (Avinashilingam, 2024-06) Vanitha G; Guide - Dr. P. Amudha

In a Wireless Sensor Network (WSN), congestion is controlled by strategies such as congestion detection and avoidance, with rate control being one of the most significant strategies for mitigating congestion. Priority-based rate control algorithms have been proposed in the literature to overcome the congestion caused by transmitting Real Time (RT) data together with Non-Real Time (NRT) data, but congestion in a network still remains a challenge. RT data traffic is often bursty in nature, and combined with high-priority NRT data the problem becomes more compounded. Neither fair allocation of bandwidth across nodes nor prioritizing traffic classes alone suffices to overcome congestion in a network. Since a long queue can occupy more than half of the buffer, leading to significant packet loss and delay, a Proficient Rate Control (PRC) algorithm is proposed in the first phase of research, combining Weighted Priority Difference of Differential Rate Control (WPDDRC) with an adaptive priority-based system to manage buffer occupancy and queue size. The PRC algorithm accumulates each child node's incoming packets in one of two virtual queues maintained on a single physical queue, one for low-priority traffic and another for high-priority traffic. When a packet is successfully received, PRC determines whether the virtual queue is congested and adjusts the child node's transmission rate accordingly.
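The following sketch illustrates the two-virtual-queue idea behind PRC: one physical buffer hosts a virtual queue per traffic class, and the child's transmission rate is reduced when the receiving virtual queue signals congestion. The thresholds and the rate-update rule here are simple illustrative assumptions, not the WPDDRC formulation itself.

```python
# Sketch of PRC-style rate control with two virtual queues sharing one
# physical buffer; thresholds and rate updates are illustrative only.
from collections import deque

BUFFER_SIZE = 64           # capacity of the shared physical queue (packets)
CONGESTION_THRESHOLD = 24  # virtual-queue length that signals congestion

class PRCNode:
    def __init__(self):
        self.high = deque()  # virtual queue for real-time (RT) traffic
        self.low = deque()   # virtual queue for non-real-time (NRT) traffic

    def receive(self, packet: dict, child_rate: float) -> float:
        """Enqueue one packet and return the child's adjusted send rate."""
        queue = self.high if packet["priority"] == "RT" else self.low
        if len(self.high) + len(self.low) < BUFFER_SIZE:
            queue.append(packet)
        if len(queue) > CONGESTION_THRESHOLD:
            return child_rate * 0.5          # back off under congestion
        return min(child_rate * 1.05, 1.0)   # otherwise recover gently

node = PRCNode()
rate = 1.0
for i in range(100):
    rate = node.receive({"priority": "RT" if i % 3 else "NRT"}, rate)
print(f"final child send rate: {rate:.2f}")
```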
By considering the priority of each traffic type and the current queue status, the PRC technique can control the consequent buffer overflow and congestion in the WSN.

The Proficient Rate Control with Fair Bandwidth Allocation (PRC-FBA) method is proposed in the second phase of research, built on the principles of traffic-type priority and equitable bandwidth assignment. The Signal to Interference plus Noise Ratio (SINR) model is used for bandwidth distribution in the WSN, balancing fairness against performance. A novel utility factor for bandwidth is then defined in terms of productiveness and fairness, and an approximate solution is derived from the sum of the node-to-node computation and the allocation of time slots. The problem is framed as a non-linear programming problem, partitioned into two halves, and solved with a two-phase approach: in the first stage, the connections between nodes are calculated, and in the second, time slots are allotted with the goal of optimizing the utility factor. As a consequence, WSNs can increase their efficiency and achieve more equitable bandwidth distribution.

Proficient Rate Control Data Aggregation Fair Bandwidth Allocation (PRCDA-FBA) is proposed in the third phase of research. It uses a powerful data aggregation mechanism to equalize battery consumption across all participating nodes, while Random Linear Network Coding (RLNC) reduces transmission frequency and increases network channel capacity, enhancing overall network throughput. The network coding path combines data for transmission to the next hop, increasing channel usage and reducing packet redundancy in the network. When congestion occurs, an adaptive methodology is triggered in which nodes transmit data using network coding to decrease the packet drop rate. In addition, Long Short-Term Memory (LSTM) recurrent neural networks, which can learn long-term dependencies, enhance the bandwidth allocation of PRCDA-FBA; their temporal dimension allows them to detect patterns in data sequences. The bandwidth used in past intervals, together with parameters such as packet drop rate, energy, packet priority, and packet delay, is used to predict the future bandwidth requirements of a path (a sketch of such a predictor follows this abstract).

In the fourth phase of research work, Enhanced Priority Rate Control Data Aggregation Fair Bandwidth Allocation (EPRCDA-FBA) is proposed to further reduce energy consumption and improve network lifetime. The major purpose of this work is to ensure that Quality of Service (QoS) standards are met in terms of data delivery delay, reduced energy consumption of energy-intensive nodes, and increased network lifespan. The protocol's priority-based technique for regulating data transfer rates considers each node's spare processing capacity. A prediction model is then used to work out how large a reduction in node transmit power can be tolerated without significantly impacting the packet delivery ratio, and, to avoid overhearing at energy-critical nodes, the priority of nodes for delivering each traffic class of packets is determined with node energy taken into account.
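As noted in the third-phase description, past per-interval statistics are used to forecast a path's future bandwidth demand. Below is a minimal PyTorch sketch of such an LSTM predictor, assuming five input features per interval (bandwidth used, packet drop rate, energy, packet priority, and packet delay); the shapes and layer sizes are illustrative.

```python
# Sketch of an LSTM that regresses the next interval's bandwidth demand
# from a window of past per-interval path statistics.
import torch
import torch.nn as nn

class BandwidthLSTM(nn.Module):
    def __init__(self, n_features: int = 5, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # predicted bandwidth, next slot

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)              # (batch, time, hidden)
        return self.head(out[:, -1, :])    # regress from the last time step

model = BandwidthLSTM()
history = torch.randn(8, 20, 5)  # 8 paths, 20 past intervals, 5 features
print(model(history).shape)      # -> torch.Size([8, 1])
```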
Item Hybrid Transfer Learning Models for Video Anomaly Detection in Surveillance Systems (Avinashilingam, 2025-03) Sreedevi R Krishnan; Guide - Dr. P. Amudha

In a modern civilized community, public safety is of prime importance, and the detection of anomalous events has become a vital factor in a successful security system. Conventional Video Surveillance (VS) methods are inadequate for identifying anomalous events on their own, owing to their inability to analyse large sequential data in dynamic environments. Video Anomaly Detection (VAD) has undergone swift development with emerging Artificial Intelligence (AI) technologies. This research addresses the challenges of conventional VAD and proposes hybrid Deep Learning (DL) models using transfer learning techniques for an efficient VAD system in dynamic environments. The application range of VAD includes, but is not limited to, social, commercial, and industrial surveillance systems; traffic management in urban areas, crowd management, emergency response, and resource optimization are its key application fields. The wide requirement for intelligent VAD inspires the design and development of reliable and efficient video surveillance systems capable of automating Anomaly Detection (AD) with minimal human intervention. The study develops and evaluates four hybrid Deep Learning models for VAD based on transfer learning to obtain improved performance.

The first research phase proposes a CNN-YOLO hybrid model for anomaly detection. It uses a CNN for model training and a modified YOLOv4 for object detection, ensuring accurate and high-speed anomaly detection. The model processes a single random frame out of every 100 input frames and yields a faster response; however, being a small model, it samples only a random frame of the input.

To overcome this limitation and the CNN-YOLO model's inability to process sequential video, a hybrid model combining a Residual Network (ResNet) and Long Short-Term Memory (LSTM) is executed in the second phase (a sketch of this architecture follows this abstract). This model can perform feature extraction and sequential information processing over thousands of video frames: ResNet-50 is employed for spatial feature extraction, and the LSTM captures temporal relationships in the input video data. Although this hybrid model enhances detection capability, it suffers from low accuracy, efficiency, and generalization due to overfitting.

In the third research phase, a segmentation-based anomaly detection technique is implemented to reduce overfitting. The model hybridizes an Improved UNet (IUNet) with the Cascade Sliding Window Technique (CSWT). In the IUNet-CSWT hybrid, the standard convolutional layers of IUNet are replaced with ConvLSTM layers for spatiotemporal feature extraction, and CSWT estimates the anomaly score of the input video, classifying events as normal or anomalous. The model handles complex patterns because it strikes an effective balance between generalization and precision; a low false positive rate and high detection accuracy make it effective in crowded environments.

The fourth phase implements a Hierarchical Multiscale CNN with LSTM, enhancing multi-scale feature identification and temporal data analysis. The model works efficiently even on low-resolution video by applying a Bilateral-Wave Denoise Technique, and the multi-scale CNN is augmented with Spatial Pyramid Pooling (SPP) to enhance feature extraction. This model outperforms all the other models.
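A minimal PyTorch sketch of the second-phase ResNet-LSTM hybrid referenced above: a frozen ResNet-50 backbone extracts per-frame spatial features, and an LSTM models their temporal order before a binary normal/anomalous head. Layer sizes and the freezing strategy are assumptions; weights=None keeps the sketch self-contained, whereas pretrained ImageNet weights would be loaded for transfer learning in practice.

```python
# Sketch of a ResNet-50 + LSTM hybrid for clip-level anomaly classification.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class ResNetLSTM(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        backbone = resnet50(weights=None)   # pretrained weights in practice
        backbone.fc = nn.Identity()         # keep the 2048-d pooled features
        for p in backbone.parameters():
            p.requires_grad = False         # transfer learning: freeze backbone
        self.backbone = backbone
        self.lstm = nn.LSTM(2048, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)    # normal vs anomalous

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = clips.shape         # (batch, frames, 3, H, W)
        feats = self.backbone(clips.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.lstm(feats)           # temporal modelling over frames
        return self.head(out[:, -1, :])     # classify from the final state

model = ResNetLSTM().eval()
with torch.no_grad():
    print(model(torch.randn(2, 16, 3, 224, 224)).shape)  # torch.Size([2, 2])
```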
Item Recommender System of Conductive Ink for Printed Electronics Applications Using Deep Neural Networks (2024-10) Alagusundari N; Dr. S. Sivakumari

Printed Electronics (PE) is a growing subfield of electronics manufacturing and materials science. It enables the fabrication of electrical and photonic devices using printing techniques such as inkjet and screen printing with conductive inks. PE facilitates the printing of a wide array of electronic components on various substrates, thereby enabling the construction of conventional circuits. The rapid expansion of PE across industrial sectors has sparked significant interest due to its capacity to produce intricate components. A fundamental aspect of PE lies in the application of conductive ink during the printing process, which is pivotal for developing flexible electronic circuits and enhancing the communicative capabilities of objects. Selecting the appropriate ink is paramount to meeting consumer requirements and ensuring product functionality. Traditionally, ink selection has been a manual task, heavily reliant on the expertise of designers; this conventional approach is time-consuming and may not always yield optimal results. Hence, there is a growing need for an automated ink-selection system for printing applications.

The fundamental focus of the research work is to build automated systems for choosing conductive ink for PE applications using a neural network, a metaheuristic algorithm, and a deep learning model. The introduced models are: an automated system using a Multilayer Perceptron Neural Network (MLPNN) and a Support Vector Machine (SVM); a selection system using Particle Swarm Optimization-MLPNN (PSO-MLPNN); and a system that picks a suitable conductive ink with the help of a Convolutional Neural Network (CNN).

The first phase of this research work develops an automated system for conductive ink selection using an MLPNN. Input data are normalized into a common range between 0 and 1 using the min-max technique (a sketch of this pipeline follows this abstract). Two models, an MLPNN and an SVM, are separately designed and trained to capture the intricate relationship between the input and output variables, and the trained models select the conductive ink from the input data. Performance is analysed by varying the number of hidden layers, the number of hidden neurons, and the number of training and testing samples, and efficacy is evaluated by computing accuracy, recall, precision, F1 score, balanced classification rate, and misclassification rate.

The second phase of this research work introduces a method for choosing conductive ink that employs PSO and an MLPNN. Input data are preprocessed with the min-max method, an MLPNN is designed and trained using the PSO algorithm to learn the relationships between the input and output variables, and the trained PSO-MLPNN then selects the ink from the input features. As in the first phase, performance is analysed by varying parameters, evaluated with the same metrics, and compared with the standard MLPNN.

The third phase of this research work builds an automated system that improves accuracy using a 1D CNN. Input data are preprocessed with the min-max method, and the 1D CNN is designed and trained to choose conductive ink for PE applications. Efficacy is evaluated with the same metrics and compared with the SVM, MLPNN, and PSO-MLPNN.
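A minimal scikit-learn sketch of the first-phase pipeline referenced above: min-max scaling of the inputs into [0, 1] followed by an MLP classifier that maps them to an ink class. The synthetic data, the eight-feature layout, and the four ink classes are illustrative assumptions.

```python
# Sketch of min-max normalization followed by MLP-based ink classification.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))     # 8 hypothetical ink/print parameters
y = rng.integers(0, 4, size=500)  # 4 hypothetical conductive-ink classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(
    MinMaxScaler(),               # normalize every feature into [0, 1]
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
)
model.fit(X_tr, y_tr)
print(f"test accuracy: {model.score(X_te, y_te):.2f}")
```

Swapping the MLPClassifier for an SVM (sklearn.svm.SVC) inside the same pipeline reproduces the phase-one comparison; the PSO-trained variant of the second phase would replace the gradient-based fitting with a swarm search over the network weights.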
Item Study on the Performance of Deep Learning Techniques for the Classification of Parkinson's Disease (Avinashilingam, 2024-02) Sabeena B; Dr. S. Sivakumari

Machine Learning and Deep Learning are promising technologies with the potential to assist and support clinicians in providing an objective and reliable diagnosis. These technologies play a crucial role in analysing vocal features for the early detection and monitoring of Parkinson's disease, aiming to furnish objective and quantitative measures of vocal impairment that offer valuable insights for both diagnosis and the evaluation of treatment outcomes. Parkinson's disease (PD) is a progressive neurodegenerative disorder that primarily affects movement and is characterized by a range of motor and non-motor symptoms. One notable non-motor symptom is vocal impairment, which can significantly impact communication and quality of life for individuals with PD; its manifestation in PD is commonly referred to as hypokinetic dysarthria. Vocal features are increasingly recognized as important diagnostic markers, and the assessment of vocal impairments, along with other motor and non-motor symptoms, contributes to a comprehensive diagnosis.

The main goal of this research work is to develop an accurate and robust classification model for PD by employing deep learning techniques, specifically focusing on vocal features. The study pursues its objectives through feature extraction, dimensionality reduction and feature ranking, feature selection using optimization techniques, ensemble feature selection methods, and ensemble deep learning classifiers.

Initially, feature extraction is applied to obtain vocal fold, TQWT, WT, MFCC, time-frequency, and baseline features from the dataset. Dimensionality is reduced with Kernel-based Principal Component Analysis (KPCA), which seeks a subspace of lower dimensionality than the original sound-recording feature space in which the new PD sound-recording features have the largest variance. The Minimum Redundancy Maximum Relevance (mRMR) technique is introduced for selecting informative features: features with a high relevance score for the class label are selected and redundant features are eliminated (a sketch of this greedy criterion appears below). Finally, a Fuzzy Convolution Long Short-Term Memory based Convolutional Neural Network (FCLSTM-CNN) classifier is introduced for PD classification, with a triangular membership function used for tuning the bias and weight values. In the mRMR algorithm, NP-hard problems can occur; these can be addressed using swarm intelligence methods.

In the second phase of the study, the Fuzzy Monarch Butterfly Optimization Algorithm (FMBOA) is implemented to select crucial features from the dataset, thereby improving the PD detection rate. In this phase, the weight value plays a crucial role in the search for optimal features within the dimensionality-reduced feature set; it is computed with a Gaussian fuzzy membership function, contributing to the algorithm's enhanced performance. Various classification algorithms are employed to evaluate the different feature sets derived from FMBOA, each exhibiting distinct combinations. The proposed Fuzzy Convolution Bi-Directional Long Short-Term Memory (FCBi-LSTM) classifier is introduced for PD classification; it amalgamates multiple speech feature types at the feature level, facilitating the identification of individuals with PD in contrast to those without.
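A sketch of the greedy mRMR criterion referenced above: at each step, the candidate feature with the highest relevance to the class label minus its mean redundancy against the already-selected features is kept. Mutual information stands in here for the thesis's exact relevance and redundancy estimators, and the quartile discretization is an illustrative choice.

```python
# Greedy mRMR: maximize relevance to the label, minimize redundancy
# against features already chosen.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score

def mrmr(X: np.ndarray, y: np.ndarray, k: int) -> list:
    n = X.shape[1]
    relevance = mutual_info_classif(X, y, random_state=0)
    # Discretize each column into quartile bins so pairwise MI is defined.
    Xd = np.column_stack([
        np.digitize(X[:, j], np.quantile(X[:, j], [0.25, 0.5, 0.75]))
        for j in range(n)
    ])
    selected, remaining = [], list(range(n))
    while len(selected) < k and remaining:
        def score(j):
            if not selected:
                return relevance[j]
            redundancy = np.mean([mutual_info_score(Xd[:, j], Xd[:, s])
                                  for s in selected])
            return relevance[j] - redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # label driven by features 0 and 3
print(mrmr(X, y, k=3))                   # features 0 and 3 should rank early
```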
Furthermore, this work is extended with an ensemble feature selection algorithm to enhance overall accuracy. In the third phase, an Optimization Based Ensemble Feature Selection (OBEFS) algorithm is proposed to select features by consensus. OBEFS combines three methods, the Fuzzy Monarch Butterfly Optimization Algorithm (FMBOA), the Levy Flight Cuckoo Search Algorithm (LFCSA), and the Adaptive Firefly Algorithm (AFA), and a correlation function merges their results to select the optimal features. The proposed OBEFS algorithm yields better results for the three feature subsets than other combinations of feature sets, and the optimal features it selects are used to train the Fuzzy Convolutional Bi-Directional Long Short-Term Memory (FCBi-LSTM) classifier. This approach extends naturally to ensemble learning, which evaluates a variety of approaches instead of a single classification algorithm and generates final results by merging the outputs of the classifiers.

In the final phase, Ensemble Deep Learning (EDL) classifiers are considered for the classification of Parkinson's disease. The EDL ensemble comprises the FCBi-LSTM, a Contractive Auto-encoder (CAE), and a Sparse Auto-encoder (SAE). CAEs are robust adaptations of standard autoencoders that learn reduced sensitivity to minor variations in the data, while SAEs play a pivotal role in training Neural Network (NN) classifiers aimed at identifying PD within the dataset. The results of the deep learning classifiers are combined through stacked generalization, and enhanced predictive performance is observed in comparison with any single model. The proposed classifier is evaluated experimentally on a dataset sourced from the University of California, Irvine (UCI) Machine Learning Repository.
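A minimal sketch of stacked generalization, the combination scheme used in the final EDL phase: base learners produce out-of-fold predictions that train a meta-learner, which merges their outputs. Off-the-shelf scikit-learn classifiers stand in for the thesis's FCBi-LSTM, CAE, and SAE models, and the synthetic data stands in for the UCI vocal-feature dataset.

```python
# Sketch of stacked generalization over heterogeneous base classifiers.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

stack = StackingClassifier(
    estimators=[  # stand-ins for the FCBi-LSTM, CAE- and SAE-based models
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                              random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ],
    final_estimator=LogisticRegression(),  # meta-learner merges base outputs
    cv=5,  # out-of-fold base predictions avoid training-set leakage
)
stack.fit(X_tr, y_tr)
print(f"stacked test accuracy: {stack.score(X_te, y_te):.2f}")
```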