Latest 2026 Projects for UK Students

1. Resource-Efficient Gesture Recognition Using Low-Resolution Thermal Cameras and Spiking Neural Networks

2a) What type of data are you going to use? (Identify main types of information/data)


This project will use secondary low-resolution thermal image data provided by the authors of the reference literature. The dataset comprises sequences of 24×32-pixel thermal images of varying hand gestures. It contains no personal, sensitive or identifiable human information.


2b) What procedures will you use to collect data (include all equipment/methods you plan to use)


The data will be downloaded from publicly accessible academic repositories associated with the original publication. Analysis will be carried out in Python on a personal computer. Thermal image sequences will be organised into ordered directories, then normalised and background-modelled as preprocessing for the spiking neural network and segmentation stages.
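As a sketch of this preprocessing stage (assuming the sequences are loaded as NumPy arrays; the [0, 1] normalisation range and the running-mean background model with `alpha=0.05` are illustrative choices, not the source paper's exact procedure):

```python
import numpy as np

def preprocess_sequence(frames, alpha=0.05):
    """Normalise a thermal frame sequence to [0, 1] and subtract a running-mean
    background model. `alpha` (an illustrative value) controls how quickly the
    background adapts to slow thermal drift."""
    frames = np.asarray(frames, dtype=float)
    lo, hi = frames.min(), frames.max()
    frames = (frames - lo) / (hi - lo + 1e-9)        # min-max normalisation
    background = frames[0].copy()                    # initialise from first frame
    foreground = np.empty_like(frames)
    for t, frame in enumerate(frames):
        foreground[t] = frame - background           # deviation from background
        background = (1 - alpha) * background + alpha * frame  # update model
    return foreground
```

The running mean keeps the background model cheap enough for the low-power setting the project targets.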

2c) What methods will you use to analyse this data?


The data will be analysed with spiking neural network processing and sparse segmentation procedures. Event-driven representations of the thermal input will be encoded by an MMV-based spiking neural network. Sparse foreground segmentation will be performed with Robust Principal Component Analysis (R-PCA). Gestures will then be classified with lightweight classifiers such as SVM or k-NN. Performance will be determined by classification accuracy and computational-efficiency indicators.
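R-PCA separates each data matrix into a low-rank background plus a sparse foreground. A minimal sketch of principal component pursuit via the inexact augmented Lagrangian method, with standard default parameters rather than values tuned for this dataset:

```python
import numpy as np

def shrink(X, tau):
    # entrywise soft-thresholding operator
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_shrink(X, tau):
    # singular value thresholding: shrink the spectrum, rebuild the matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(M, max_iter=500, tol=1e-7):
    """Decompose M into low-rank L (background) + sparse S (foreground) by
    inexact-ALM principal component pursuit. Parameter choices are the usual
    defaults, not values taken from the source paper."""
    lam = 1.0 / np.sqrt(max(M.shape))
    norm_M = np.linalg.norm(M)
    mu = 1.25 / np.linalg.norm(M, 2)     # 2-norm = largest singular value
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(max_iter):
        L = svd_shrink(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        residual = M - L - S
        Y += mu * residual
        mu = min(mu * 1.5, 1e7)
        if np.linalg.norm(residual) / norm_M < tol:
            break
    return L, S
```

Applied to a matrix whose columns are vectorised frames, `L` captures the static thermal background and `S` the moving hand.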

3. Data Management

The secondary thermal gesture data required for this project is openly available for scholarly use. No personal or identifiable data will be acquired or processed. All datasets, intermediate products and research findings will be stored on a password-protected personal computer and used solely for academic purposes. The data will be handled according to university data management guidelines, and no data management processes are needed beyond the general module-level ethics requirements.

Introduction to the project:

Gesture recognition is a crucial aspect of human-computer interaction, especially in smart environments, assistive technology and touch-free interaction. Older vision-based gesture recognition systems rely heavily on high-resolution RGB cameras and deep learning algorithms, which are computationally costly, power-intensive, and do not run well on edge or embedded devices (Tang et al. 2023). Thermal imaging is an alternative sensing modality that preserves user privacy and is largely insensitive to lighting changes, but it presents difficulties of its own because of low spatial resolution and the absence of texture information. Recent studies have investigated energy-efficient gesture recognition that combines low-resolution thermal sensors with neuromorphic computing concepts. The arXiv article Resource-Efficient Gesture Recognition using Low-Resolution Thermal Camera through Spiking Neural Networks and Sparse Segmentation (2024) proposes a lightweight and interpretable pipeline that integrates spiking neural networks (SNNs) built on Monostable Multivibrator (MMV) neurons with sparse segmentation through Robust Principal Component Analysis (R-PCA). The major advantage of this method is that it greatly lowers computational complexity while retaining recognition rates competitive with deep spiking convolutional networks. This project aligns with existing research on edge AI and neuromorphic computing, targeting low-power, low-resolution sensing and biologically inspired models. It contributes by implementing and testing the proposed pipeline in a repeatable way and by assessing its performance, efficiency and suitability for embedded, resource-constrained scenarios (Hwang et al. 2024).

Aim of the project:

The aim of this project is to design, implement, and evaluate a resource-efficient gesture recognition system using low-resolution thermal imagery and spiking neural networks.
Objectives
● To review existing literature on thermal-based gesture recognition and spiking neural networks
● To work with low-resolution thermal gesture datasets provided by the authors
● To implement an MMV-based spiking neural network for event-driven feature extraction
● To apply sparse segmentation using Robust Principal Component Analysis (R-PCA)
● To classify gestures using lightweight classifiers such as SVM or k-NN
● To evaluate recognition accuracy and computational efficiency of the proposed system

Research Design

The research is quantitative, experimental and design-based. The project aims to develop a gesture recognition pipeline based on biological principles with reduced energy usage and to test it on low-resolution thermal data (Xu et al. 2025).

Procedures

The research will commence with the acquisition of the low-resolution thermal gesture data supplied by the authors of the source article. The dataset is composed of sequences of thermal images of various hand gestures at a spatial resolution of 24×32 pixels. Preprocessing will involve normalisation, background removal and frame alignment where necessary. An MMV-based spiking neural network will be used to convert thermal image frames into spike-based representations. Sparse foreground segmentation will then be performed with Robust Principal Component Analysis to separate gesture-related motion from noise. Lightweight classifiers, Support Vector Machines (SVM) or k-Nearest Neighbours (k-NN), will be trained on feature representations obtained from the segmented output. The system will be evaluated on gesture recognition accuracy, computational efficiency, and suitability for deployment on low-power or edge devices. The results will be examined and compared against existing methods discussed in the literature (Gupta et al. 2022).
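The spike-conversion step can be illustrated with a simplified delta-modulation encoder; note this is an illustrative stand-in, not the MMV neuron model used in the source article:

```python
import numpy as np

def delta_spike_encode(frames, threshold=0.15):
    """Simplified event-style encoding: emit a +1 (ON) or -1 (OFF) spike
    whenever a pixel's intensity changes by more than `threshold` between
    consecutive frames. The threshold value is an illustrative choice."""
    frames = np.asarray(frames, dtype=float)
    diffs = np.diff(frames, axis=0)           # frame-to-frame change
    spikes = np.zeros_like(diffs, dtype=np.int8)
    spikes[diffs > threshold] = 1             # ON events
    spikes[diffs < -threshold] = -1           # OFF events
    return spikes
```

The resulting spike tensor is sparse wherever the scene is static, which is what makes event-driven downstream processing cheap.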
Resources and Equipment
Software: Python
Libraries: NumPy, SciPy, scikit-learn, Matplotlib
Hardware: Standard laptop or PC (no graphics card needed)


Novelty
The strongest and most clearly differentiable novelty relative to the current paper would be to go beyond its predetermined, rule-based post-processing pipeline and implement an adaptive, end-to-end learning model while maintaining resource efficiency. In particular, the present publication uses a manually designed division between MMV-based gesture wake-up, R-PCA segmentation and heuristic trajectory-based classification, which, though effective, does not extend to more complex or unconstrained gestures. This project can instead propose a learned spatiotemporal representation of thermal gestures, combining lightweight temporal attention, graph modelling of trajectories, or neuromorphic learning policies that adapt online to motion patterns and environmental change across users. Moreover, novelty can be broadened by mitigating an important drawback of the paper, namely that it tests a single in-cabin scenario, via cross-domain generalisation and lifelong learning, allowing robust recognition of gestures across different users, sensor placements and ambient thermal conditions without retraining from scratch. This positions the innovation as a highly adaptive neuromorphic thermal gesture recognition system that maintains ultra-low compute and memory costs while offering much greater flexibility, scalability and deployability. In this work, an adaptive, end-to-end thermal gesture recognition model, a lightweight Python-implemented Spatiotemporal Graph Attention Network (ST-GAT), will be employed instead of the rule-based MMV, R-PCA and heuristic classification pipeline used in the current implementation.


References

Tang, J., Li, G., Lin, J. and Chen, Z., 2023. Efficient spiking neural networks for resource-limited vision applications. IEEE Transactions on Neural Networks and Learning Systems, 34(2), pp.755–767.
Hwang, J., Lee, K. and Kim, D., 2024. Low-power gesture recognition using thermal imagery and lightweight CNNs for embedded devices. Pattern Recognition Letters, 175, pp.15–23.
Xu, B., Zhou, T., Cao, X. and Li, S., 2025. Sparse temporal segmentation of event-based visual data for efficient gesture understanding. IEEE Transactions on Image Processing, 34, pp.1423–1436.
Gupta, R., Singh, A. and Jain, R., 2022. Thermal camera-based human activity classification with energy-aware deep models. IEEE Sensors Journal, 22(15), pp.14697–14707.


2. IR Reasoner: Real-Time Infrared Object Detection by Visual Reasoning

2a) What type of data are you going to use? (Identify main types of information/data)

This research will use secondary thermal infrared image data that is publicly available through open datasets, such as the FLIR ADAS thermal dataset. The data consists of infrared images captured in real driving scenarios, annotated with objects such as pedestrians, vehicles and bicycles. Annotations are recorded as bounding boxes with class labels, which are essential for supervised object detection. No personal, identifiable or sensitive human information will be gathered or analysed.

2b) What procedures will you use to collect data (include all equipment/methods you plan to use)

The dataset will be downloaded from publicly available sources via a web browser or dataset hosting sites. Data preparation will be done in Python. Images and annotations will be sorted into a systematic folder structure suitable for YOLO training. Preprocessing will involve image resizing, normalisation, label verification, and splitting into training, validation and test sets. Model development and testing will be carried out on a standard personal computer; a GPU may optionally be used to speed up training.
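The train/validation/test split can be sketched as follows; the 70/15/15 ratio and the fixed seed are illustrative defaults, not values prescribed by the dataset:

```python
import random

def split_dataset(image_paths, train_frac=0.7, val_frac=0.15, seed=42):
    """Shuffle deterministically and split image file paths into
    train/val/test lists. Sorting first makes the split reproducible
    regardless of filesystem listing order."""
    paths = sorted(image_paths)
    random.Random(seed).shuffle(paths)
    n_train = int(len(paths) * train_frac)
    n_val = int(len(paths) * val_frac)
    return (paths[:n_train],
            paths[n_train:n_train + n_val],
            paths[n_train + n_val:])
```

Fixing the seed matters here because the baseline YOLO model and the IR Reasoner variant must be trained and evaluated on identical splits for the comparison to be fair.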

2c) What methods will you use to analyse this data?

Deep learning-based object detection will be used to analyse the data. Thermal object detection will be carried out by implementing a baseline YOLO detector (e.g. YOLOv4 or YOLOv7). An efficient, lightweight self-attention Reasoner module will then be incorporated into the feature extraction stage to strengthen spatial and semantic reasoning over image regions. Standard object detection measures, mean Average Precision (mAP), precision, recall, inference speed (FPS) and false detection rate, will be used to measure model performance. The baseline YOLO model and the improved IR Reasoner architecture will be compared on these measures.
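Evaluation rests on matching predicted boxes to ground truth by Intersection-over-Union before computing precision, recall and mAP; a minimal IoU sketch:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as
    (x1, y1, x2, y2): the overlap measure that decides whether a
    detection counts as a true positive."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0
```

A detection is typically counted correct when IoU with a ground-truth box of the same class exceeds a threshold (0.5 is the common default), and mAP averages precision over recall levels and classes.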

Data Management

This project uses publicly available secondary thermal infrared image data intended for research, including the FLIR ADAS dataset. The data, infrared images and associated object annotations, contains no personal, sensitive or personally identifying information. All datasets and trained models will be stored on a password-protected personal computer and used only within the scope of academic research. No data will be redistributed and no commercial use will be made. All derived data, experimental results and trained model files will be stored only for the duration of the project and handled according to university data management policies. This study requires no data management processes beyond the standard module-level ethics requirements. Dataset Link: https://www.kaggle.com/datasets/rthwkk/ir-object-detection-dataset

Introduction

Infrared (IR) object detection is an essential task in applications such as autonomous driving, surveillance and night-time monitoring, because visible-light cameras often fail in low light or poor weather. Thermal infrared images are resilient in such conditions, though object recognition in them is challenging due to low texture detail, low contrast and blurred object edges. Conventional computer vision methods and standard deep learning detectors usually fail to learn meaningful spatial and semantic associations in thermal scenes (Zhang et al. 2023). Recent developments in deep learning have adapted convolutional object detectors, in particular the YOLO-family architectures, to infrared images with clear results. Research has shown that fine-tuning YOLO models on thermal data such as FLIR ADAS achieves real-time performance, but detection accuracy remains limited by the lack of contextual reasoning. The IR Reasoner framework overcomes this shortcoming with a lightweight self-attention-based reasoning module that reinforces both spatial and semantic associations among image regions without slowing real-time inference (Redmon and Farhadi 2018). The proposed project aligns with the existing literature in that it extends YOLO detectors and incorporates a visual reasoning mechanism for infrared data. It will be valuable because it implements and tests the IR Reasoner architecture in a reproducible experimental setting, compares baseline and improved models, and examines the trade-off between detection precision and real-time speed in thermal object detection.

Aim of the project:

This project will implement and test a real-time infrared object detection system based on the IR Reasoner framework, extending a YOLO-based detector with visual reasoning to enhance detection in thermal images.
Objectives
● To review the existing literature on infrared object detection and thermal image analysis with YOLO.
● To develop a baseline YOLO object detector for thermal infrared images.
● To integrate the IR Reasoner visual reasoning module into the YOLO feature extraction stage.
● To train and evaluate the proposed model on a publicly available thermal dataset.
● To compare the baseline and improved models on conventional object detection metrics.
● To examine the trade-off between detection accuracy and real-time inference.


Method

The methodology below follows the structure required for KF7029 Project Approval submissions and is based on IR Reasoner: Real-Time Infrared Object Detection by Visual Reasoning (CVPRW 2023).

Research Design

The research is a quantitative, experimental and design-based study. The project will implement, extend and test a deep learning-based infrared object detection model by introducing a visual reasoning module into an existing YOLO architecture. Assessment will be conducted through controlled experiments and comparison (Bochkovskiy et al. 2020).

Procedures

The research will commence by choosing a publicly accessible thermal infrared dataset with annotated objects suitable for supervised learning, such as the FLIR ADAS dataset. Dataset preprocessing will include image resizing, normalisation of pixel values, verification of annotation quality, and division of the data into training, validation and test sets. First, a baseline YOLO object detector (e.g. YOLOv4 or YOLOv7) will be deployed and trained on the thermal dataset. The IR Reasoner self-attention module will then be incorporated into the feature extraction part of the YOLO network to facilitate spatial and semantic reasoning. The improved model will be trained or fine-tuned under the same experimental conditions (Teutsch et al. 2024). Standard object detection metrics, mean Average Precision (mAP), precision, recall, and inference speed in frames per second (FPS), will be used to compare model performance. The findings will be evaluated to determine the gains in detection accuracy and real-time performance.
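The global spatial reasoning added by a self-attention module can be sketched in NumPy as follows; this single-head version is purely illustrative and is not the published IR Reasoner module:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_self_attention(feat, w_q, w_k, w_v):
    """Single-head self-attention over the flattened spatial positions of a
    (h, w, c) feature map: every position attends to every other position,
    which is the kind of global spatial/semantic reasoning a Reasoner-style
    module adds on top of convolutional features."""
    h, w, c = feat.shape
    x = feat.reshape(h * w, c)                       # flatten spatial grid
    q, k, v = x @ w_q, x @ w_k, x @ w_v              # queries, keys, values
    attn = softmax(q @ k.T / np.sqrt(k.shape[1]))    # (hw, hw) attention map
    return (attn @ v).reshape(h, w, -1)
```

In a real detector this would run on PyTorch tensors inside the backbone; the NumPy version only shows the data flow and the quadratic cost in the number of spatial positions, which is why the module must stay lightweight for real-time FPS.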

Resources and Equipment

Software: Python
Libraries: PyTorch, OpenCV, NumPy, Matplotlib

Novelty

Relative to the reviewed paper, Infrared Maritime Object Detection Network With Feature Enhancement and Adjacent Fusion, an evident and justifiable innovation would be to adapt its fixed feature-enhancement and fusion model into an adjustable, condition-sensitive and generally applicable detector. In particular, whereas the current paper relies on handcrafted attention improvements (ICA, Dilated CBAM) and a fixed adjacent feature fusion specific to the maritime environment, this project can introduce a dynamic, context-adaptive mechanism that strengthens or weakens feature enhancement and fusion depending on the scene (sea state, clutter density, target scale distribution or thermal contrast). This may be achieved through lightweight scene-adaptive gating, transformer-based global reasoning, or uncertainty-aware attention that explicitly represents background ambiguity. Moreover, a domain-resilient learning approach (such as cross-domain training, self-supervised pretraining, or physics-informed loss functions) can be proposed to sustain performance across maritime and generic infrared tasks, unlike the reviewed paper, which acknowledges limited generalisation to non-maritime data. This positions the work as not only enhancing the detection accuracy of low-contrast and small targets, but also addressing flexibility and deployability, which remains an open limitation in the literature. A Scene-Adaptive Context-Aware Infrared Object Detection Network (SACAIODNet) model will be implemented.


References

Redmon, J. and Farhadi, A., 2018. YOLOv3: An Incremental Improvement. arXiv preprint arXiv:1804.02767. Available at: https://arxiv.org/abs/1804.02767
Bochkovskiy, A., Wang, C.Y. and Liao, H.Y.M., 2020. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv preprint arXiv:2004.10934. Available at: https://arxiv.org/abs/2004.10934


3. Infrared Small Target Detection Using Local and Nonlocal Spatial Information

2a) What type of data are you going to use? (Identify main types of information/data)

This study will use secondary infrared image data from datasets published publicly on Kaggle. The data comprises grayscale infrared (thermal) images containing small targets against complicated backgrounds, including sky, sea and land scenes. The images are characteristically low-contrast and noisy, which makes them a demanding test for small target detection algorithms in infrared imaging. The datasets contain no personal, identifiable or sensitive human data.

2b) What procedures will you use to collect data (include all equipment/methods you plan to use)

The infrared image datasets will be downloaded via a web browser or the Kaggle API. Data processing will be performed in Python on a standard personal computer. The images will be arranged into directories and preprocessed with grayscale normalisation, noise reduction and resizing where needed. Dataset selection will favour scenes used in previous research on infrared small target detection, to ensure reproducibility and comparability with existing studies.

2c) What methods will you use to analyse this data?

The data will be analysed using classical image processing and matrix-based methods. Small targets will be enhanced with a Dual-Window Local Contrast Method (DW-LCM). Nonlocal spatial information will be exploited with a Multiscale Window Infrared Patch-Image (MW-IPI) model, which will be used to suppress background clutter. Targets will be extracted with thresholding and morphological operations. Detection performance will be evaluated with the standard infrared target detection metrics: Signal-to-Clutter Ratio Gain (SCRG), Background Suppression Factor (BSF), and detection and false alarm rates.
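The dual-window idea, scoring each pixel by the contrast between a small inner window and a surrounding outer ring, can be sketched as below; this is a simplified illustration, not the exact DW-LCM formulation, and the window radii are illustrative choices:

```python
import numpy as np

def dual_window_contrast(img, r_in=1, r_out=4):
    """Simplified dual-window local contrast map: for each pixel, compare the
    mean of a small inner window against the mean of the surrounding outer
    ring. Bright small targets score high; flat background scores near zero.
    Border pixels (within r_out of the edge) are left at zero."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(r_out, h - r_out):
        for x in range(r_out, w - r_out):
            inner = img[y - r_in:y + r_in + 1, x - r_in:x + r_in + 1]
            outer = img[y - r_out:y + r_out + 1, x - r_out:x + r_out + 1]
            ring_sum = outer.sum() - inner.sum()
            ring_n = outer.size - inner.size
            out[y, x] = inner.mean() - ring_sum / ring_n
    return out
```

The nested loops keep the sketch readable; a practical implementation would use integral images or convolution to compute the window means.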

Data Management

This project uses only publicly available secondary infrared image datasets from Kaggle, intended for scholarly and research purposes. No personal, sensitive or identifiable human information will be gathered, processed or retained at any point of the research. Each dataset will be stored on a password-protected personal computer and used only for this project in an academic context. No information will be distributed to third parties. All processed data, interim products and experimental findings will be stored only while the project is in progress, then deleted or archived according to university data management policies. This study requires no data management processes beyond the standard module-level ethics requirements.

Introduction

Infrared (IR) small target detection is very important in surveillance, missile warning platforms, maritime applications and remote sensing. Infrared imagery has a low signal-to-clutter ratio, background noise and intricate thermal environments, all of which make small targets difficult to detect. In traditional statistical and filtering-based methods, background clutter usually remains while weak targets are suppressed. Recent studies have shown that local contrast enhancement combined with nonlocal spatial modelling can lead to a notable improvement in detection performance. The study Infrared Small Target Detection Using Local and Nonlocal Spatial Information (2019) proposes an organised, classical image-processing pipeline that combines the Dual-Window Local Contrast Method (DW-LCM) with a Multiscale Window Infrared Patch-Image (MW-IPI) model. It is not a deep learning model, does not depend on big data, and requires no graphics card, yet achieves strong detection results. The proposed project is relevant to the existing literature given its emphasis on interpretable, mathematically grounded methods, and contributes by implementing, testing and analysing the efficacy of local and nonlocal spatial procedures on infrared images through controlled experiments.

Aim of the project:

The aim of this project is to implement and evaluate a classical infrared small target detection framework based on local and nonlocal spatial information.
Objectives
● To study existing infrared small target detection techniques and challenges
● To implement the Dual-Window Local Contrast Method for target enhancement
● To implement the Multiscale Window Infrared Patch-Image model for background suppression
● To integrate local and nonlocal methods into a unified detection pipeline
● To evaluate detection performance using standard infrared target detection metrics
● To analyse robustness under different background and noise conditions


Method

Research Design

The research design is quantitative, experimental image processing: algorithm implementation, controlled experimentation, and performance measurement on publicly available infrared image datasets.

Data Collection

The research will rely on secondary infrared image datasets that are publicly available on Kaggle and widely used in infrared target detection research. No data will be collected from human subjects, and no person-related information will be used. Kaggle Link: https://www.kaggle.com/datasets/llpukojluct/ir-small-target

Procedure

Data Preparation: Load the IR images and convert them to grayscale intensity images; apply normalisation and noise reduction where necessary.
Local Contrast Enhancement: Apply the Dual-Window Local Contrast Method (DW-LCM) to enhance small targets by comparing inner- and outer-window intensities.
Nonlocal Spatial Modelling: Create multiscale patch-image representations of the infrared frames, then use matrix decomposition methods to exploit nonlocal correlations and cancel background clutter.
Target Extraction: Extract small targets with thresholding and morphological post-processing.
Evaluation: Measure detection performance using Signal-to-Clutter Ratio Gain (SCRG), Background Suppression Factor (BSF), detection probability and false alarm rate.
Analysis: Report results across varying scenes and clutter levels, and compare the strengths and weaknesses of the local and nonlocal components.
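The evaluation step can be sketched with commonly used definitions of SCRG and BSF; exact formulations vary between papers, so treat these as illustrative:

```python
import numpy as np

def scr(image, target_mask):
    """Signal-to-clutter ratio: target-background contrast divided by the
    background standard deviation (one common definition)."""
    bg = image[~target_mask]
    return abs(image[target_mask].mean() - bg.mean()) / bg.std()

def scrg_bsf(img_in, img_out, target_mask):
    """SCR Gain (how much the target stands out after processing, relative
    to before) and Background Suppression Factor (how much the background
    clutter's standard deviation shrank)."""
    scrg = scr(img_out, target_mask) / scr(img_in, target_mask)
    bsf = img_in[~target_mask].std() / img_out[~target_mask].std()
    return scrg, bsf
```

Both metrics should be reported per scene, since a method can suppress easy sky clutter well while failing on structured sea clutter.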

Tools and Resources

Software: Python
Libraries: NumPy, OpenCV, SciPy, Matplotlib
Hardware: Standard PC or laptop (CPU-based processing is sufficient; no GPU required)
Ethical Considerations

This project involves no human participants, personal data or sensitive information. All datasets used are publicly available and intended for academic research. There are no ethical risks concerning data privacy, consent or participant welfare. The project fully conforms to the module-level criteria for ethical approval and requires no further ethical clearance.

Health and Safety and Risk Assessment

The project is purely computational, so no physical, psychological or environmental risks are involved. All work will be done in a normal working environment; no face-to-face interaction or external testing environments are required.

Novelty

One of the most effective and visible novelties for this paper would be to substitute the proposed predetermined combination of local and nonlocal priors with an adaptive, data-driven and temporally aware detection system. Although the current work is innovative in its integration of DW-LCM and MW-IPI, it relies on manually chosen window sizes and multiplicative fusion, and it is restricted to static, single-frame processing with empirically tuned parameters. This project can present a learned adaptive fusion approach that dynamically combines local contrast and nonlocal low-rank priors based on scene complexity, background dynamics and target confidence, removing the sensitivity to manual parameter changes. Moreover, a spatiotemporal infrared small target detection model, extending the method from single-frame detection to temporal consistency with motion saliency and recurrent or transformer-based temporal aggregation, could stabilise clutter handling across frames and greatly decrease false alarms. This positions the contribution as a next-generation infrared small target detection scheme that preserves the meaning of the local-nonlocal priors while showing better robustness, flexibility and deployability in dynamic real-world applications.



Recursive approach to the design of a parallel self-timed adder
Aging-aware reliable multiplier design with adaptive hold logic
Low Power Array Multiplier using Modified Full Adder
Fast Radix-10 Multiplication Using Redundant BCD Codes
Design Of Parallel Prefix Adders
Design & Analysis of 16 bit RISC Processor
High-Speed and Energy-Efficient Carry Skip Adder Operating Under a Wide Range of Supply Voltage Levels

A Fast Convergence Technique for Accuracy Configurable Approximate Adder Circuits
Input-Based Dynamic Reconfiguration of Approximate Arithmetic Units for Video Encoding
Approximate Adder with Hybrid Prediction and Error Compensation Technique

In-Field Test for Permanent Faults in FIFO Buffers of NOC Routers
A Novel Coding Scheme for Secure Communications in Distributed RFID Systems
A Low-Complexity Turbo Decoder Architecture for Energy-Efficient Wireless Sensor Networks
Fully reused VLSI architecture of FM0/Manchester encoding using SOLS technique for DSRC applications
Hybrid LUT/Multiplexer FPGA Logic Architectures
Data Encoding Techniques for Reducing Energy Consumption in Network-On-Chip
Memory-Reduced Turbo Decoding Architecture

A Novel Ternary Content Addressable Memory Design Using Reversible Logic
An area and energy-efficient FIFO design for image/video application

Fault Tolerant Parallel FFTs Using Error Correction Codes and Parseval Checks
Method to Design Single Error Correction Codes With Fast Decoding for a Sub set of Critical Bits

A low-power single-phase clock distribution Using VLSI technology
VLSI design of a digital clock using gals technique

A High-Performance FIR Filter Architecture for Fixed and Reconfigurable Applications
Design of fast FIR filter using compressor and Carry Select Adder

Design and analysis of Advanced Microcontroller bus architecture (AMBA APB) protocol in VLSI
VLSI design of universal asynchronous receiver transmitter (UART)
VLSI implementation of Serial peripheral interface SPI protocol using verilog HDL

VLSI design of vending machine using state machine for high performance
Design of high performance traffic light controller in VLSI
A method to design Reduced Instruction Set Computer using Verilog HDL
VLSI implementation of Advance encryption system for coding and decoding
Personal Health Monitoring With Wifi Based Mobile Devices
Communication between two nodes using IoT
Accessible Design To Control Home Area Networks using IoT
Security based home automation system using IoT
Display of Underground Cable Fault Distance over IoT
Industrial device control over IoT based Wifi module
IOT based robot control using android app.
GPS based asset vehicle/animal/child tracking system
Advance accident control system design by the help of Google map
Location-Aware and Safer Cards: Enhancing RFID Security and Privacy via Location Sensing
Electric Vehicle Stability Control Based on Disturbance Using GPS
Live bus stops and modern buses technology
Fingerprint based home security system
Remote-Control System of High Efficiency and Intelligent Street Lighting Using a ZIGBEE Network of Devices and Sensors
Keypad Based Advanced Home Automation System For Next Generation Apartments Using ZIGBEE
Keypad Based Wheel Chair For Handicapped Persons
DTMF Based Home Navigation System For The Elderly And The Physically Challenged
DTMF Based robot control system
GPS Based Border Alert System For Fishermen
Water Environment Monitoring System Based on ZIGBEE/Bluetooth Technology
Advanced Smart Card Based Power Access Control System Using Microcontroller
Design And Construction Of Radio Frequency Identification (RFID) Vehicle Recognition
Employee Id And Attendance Maintenance Using Smart Card
Metal Detection Robot Control Using RFID
Bus number announcement and identification for blind people using RFID
RFID based home automation system
Application Of Radio Frequency Controlled Intelligent Military Robot In Defense
Cell Phone Operated Land Rover Robot
Home Automation Using ZIGBEE
Development Of Gas Leak Detection And Location System Based On Wireless Technology
GPS And GSM Based Real-Time Human Health Monitoring And Alert System For Cardiac Patients
Location Finding for Blind People Using Voice Navigation Stick
Remote Controlled Home Automation Using Bluetooth
DTMF Based Industrial Parameter Controlling System
Pc Based hi-tech Home Implementation
Pc Based Stepper Motor Speed And Direction Control
GSM And GPS Based Live Human Detection With IR Sensor
DC Motor Speed And Direction Control Using RF With PC
Industrial Devices Controlling System Using Bluetooth

More projects uploading soon

           
1. Direct Torque Control for Doubly Fed Induction Machine-Based Wind Turbines under Voltage Dips and Without Crowbar Protection.
2. Single-Phase to Three-Phase Drive System Using Two Parallel Single-Phase Rectifiers.
3. An Inrush Mitigation Technique of Load Transformers for the Series Voltage Sag Compensator.
4. Instantaneous Power Control of D-STATCOM with Consideration of Power Factor Correction.
5. Wind Farm to Weak-Grid Connection using UPQC Custom Power Device.
6. An Improved UPFC Control for Oscillation Damping.
7. Constant Power Control and Fault-Ride-Through Enhancement of DFIG Wind Turbines with Energy Storage.
8. Modeling of Multi-Terminal VSC HVDC Systems with Distributed DC Voltage Control.
9. Grid Interconnection of Renewable Energy Sources at the Distribution Level With Power-Quality Improvement Features.
10. Distributed FACTS—A New Concept for Realizing Grid Power Flow Control.
11. A Novel Intelligence Control Scheme For D-Statcom Using H-Bridge Multilevel Inverter.

RESEARCH & DEVELOPMENT AREAS    

Mechanical & Electronics Projects

• Gripper
• Geared
• Hydraulic
• Pneumatic
• Electromagnetic

• Drilling Robots
• Welding Robots
• Measuring / Scaling Robots
• Load carrier Robots

• Door Locking
• Temperature Control
• Gas/Smoke Detection
• Water Overflow
• Wi-Fi Controlled internal Switches / Switch Boards

• Machine Components Design & Structural Analysis
• Automotive Design & Structural Analysis

Civil Projects

• Truss Design and Structural Analysis
• Drainage Management, Flow, Construction, Feasibility Analysis
• Ring Road Planning and Construction
• Urban Planning and Construction for future needs
• Best Practices in Road Design, Construction with Reliable Road building Materials
• Rain Water Collection and Storage for optimum usage
• Water Management Networks-Planning / Construction and analysis
