Search Results (1,029)

Search Parameters:
Keywords = space camera

24 pages, 13165 KiB  
Article
Deep BiLSTM Attention Model for Spatial and Temporal Anomaly Detection in Video Surveillance
by Sarfaraz Natha, Fareed Ahmed, Mohammad Siraj, Mehwish Lagari, Majid Altamimi and Asghar Ali Chandio
Sensors 2025, 25(1), 251; https://doi.org/10.3390/s25010251 - 4 Jan 2025
Abstract
Detection of anomalies in video surveillance plays a key role in ensuring the safety and security of public spaces. The number of surveillance cameras is growing, making manual monitoring increasingly impractical and raising the demand for automated systems that detect abnormal events or anomalies, such as road accidents, fighting, snatching, car fires, and explosions, in real time. These systems improve detection accuracy, minimize human error, and make security operations more efficient. In this study, we proposed the Composite Recurrent Bi-Attention (CRBA) model for detecting anomalies in surveillance videos. The CRBA model combines DenseNet201 for robust spatial feature extraction with BiLSTM networks that capture temporal dependencies across video frames. A multi-attention mechanism was also incorporated to direct the model’s focus to critical spatiotemporal regions, improving the system’s ability to distinguish between normal and abnormal behaviors. By integrating these methodologies, the CRBA model improves the detection and classification of anomalies in surveillance videos, effectively addressing both spatial and temporal challenges. Experimental assessments demonstrate that the CRBA model achieves high accuracy on both the University of Central Florida (UCF) dataset and the newly developed Road Anomaly Dataset (RAD). The model enhances detection accuracy while also improving resource efficiency and minimizing response times in critical situations, making it a valuable tool for public safety and security operations, where rapid and accurate responses are needed to maintain safety. Full article
(This article belongs to the Section Intelligent Sensors)
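
To make the described architecture concrete, here is a minimal PyTorch sketch of a DenseNet201 → BiLSTM → temporal-attention classifier in the spirit of the CRBA description above; the layer sizes, pooling step, and single attention head are assumptions for illustration, not the published configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import densenet201  # torchvision >= 0.13

class CRBASketch(nn.Module):
    """Rough sketch: per-frame DenseNet201 features -> BiLSTM -> attention pooling."""
    def __init__(self, num_classes=2, hidden=256):
        super().__init__()
        self.backbone = densenet201(weights=None).features  # spatial features, 1920 channels
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.bilstm = nn.LSTM(1920, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)                 # temporal attention scores
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, clip):                                  # clip: (batch, frames, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.pool(self.backbone(clip.flatten(0, 1))).flatten(1)  # (b*t, 1920)
        seq, _ = self.bilstm(feats.view(b, t, -1))                        # (b, t, 2*hidden)
        w = torch.softmax(self.attn(seq), dim=1)                          # (b, t, 1)
        return self.head((w * seq).sum(dim=1))                            # (b, num_classes)
```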

19 pages, 8290 KiB  
Article
Multi-Scale Contrastive Learning with Hierarchical Knowledge Synergy for Visible-Infrared Person Re-Identification
by Yongheng Qian and Su-Kit Tang
Sensors 2025, 25(1), 192; https://doi.org/10.3390/s25010192 - 1 Jan 2025
Abstract
Visible-infrared person re-identification (VI-ReID) is a challenging cross-modality retrieval task to match a person across different spectral camera views. Most existing works focus on learning shared feature representations from the final embedding space of advanced networks to alleviate modality differences between visible and infrared images. However, exclusively relying on high-level semantic information from the network’s final layers can restrict shared feature representations and overlook the benefits of low-level details. Different from these methods, we propose a multi-scale contrastive learning network (MCLNet) with hierarchical knowledge synergy for VI-ReID. MCLNet is a novel two-stream contrastive deep supervision framework designed to train low-level details and high-level semantic representations simultaneously. MCLNet utilizes supervised contrastive learning (SCL) at each intermediate layer to strengthen visual representations and enhance cross-modality feature learning. Furthermore, a hierarchical knowledge synergy (HKS) strategy for pairwise knowledge matching promotes explicit information interaction across multi-scale features and improves information consistency. Extensive experiments on three benchmarks demonstrate the effectiveness of MCLNet. Full article
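
The supervised contrastive learning applied at each intermediate layer is, in its standard form (Khosla et al.), computable as below; this is a generic SupCon loss sketch, not MCLNet’s exact formulation, and the temperature value is illustrative.

```python
import torch
import torch.nn.functional as F

def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss over L2-normalised embeddings.
    features: (N, D) from one intermediate layer; labels: (N,) identity labels."""
    z = F.normalize(features, dim=1)
    mask_self = torch.eye(len(z), dtype=torch.bool, device=z.device)
    logits = (z @ z.t()) / temperature
    logits = logits.masked_fill(mask_self, float('-inf'))        # drop self-similarity
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~mask_self
    log_prob = log_prob.masked_fill(~pos, 0.0)                    # keep positive pairs only
    return -(log_prob.sum(1) / pos.sum(1).clamp(min=1)).mean()
```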

20 pages, 2696 KiB  
Article
See-Then-Grasp: Object Full 3D Reconstruction via Two-Stage Active Robotic Reconstruction Using Single Manipulator
by Youngtaek Hong, Jonghyeon Kim, Geonho Cha, Eunwoo Kim and Kyungjae Lee
Appl. Sci. 2025, 15(1), 272; https://doi.org/10.3390/app15010272 - 30 Dec 2024
Abstract
In this paper, we propose an active robotic 3D reconstruction methodology for achieving full object 3D reconstruction. Existing robotic 3D reconstruction approaches often struggle to cover the entire view space of the object or reconstruct occluded regions, such as the bottom or back side. To address these limitations, we introduce a two-stage robotic active 3D reconstruction pipeline, named See-Then-Grasp (STG), that employs a robot manipulator for direct interaction with the object. The manipulator moves toward the points with the highest uncertainty, ensuring efficient data acquisition and rapid reconstruction. Our method expands the view space of the object to include the entire perspective, including occluded areas, making the previous fixed view candidate approach time-consuming for identifying uncertain regions. To overcome this, we propose a gradient-based next best view pose optimization method that efficiently identifies uncertain regions, enabling faster and more effective reconstruction. Our method optimizes the camera pose based on an uncertainty function, allowing it to identify the most uncertain regions in a short time. Through experiments with synthetic objects, we demonstrate that our approach effectively addresses the next best view selection problem, achieving significant improvements in computational efficiency while maintaining high-quality 3D reconstruction. Furthermore, we validate our method on a real robot, showing that it enables full 3D reconstruction of real-world objects. Full article
(This article belongs to the Special Issue Advances in Robotics and Autonomous Systems)
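
The gradient-based next-best-view idea can be illustrated with a toy optimization: treat view uncertainty as a differentiable function of a 6-DoF camera pose and ascend it. The uncertainty function below is a hypothetical stand-in (a smooth bump), not the NeRF-derived quantity used in the paper.

```python
import torch

def view_uncertainty(pose):
    """Hypothetical differentiable stand-in for the rendered-view uncertainty at a
    6-DoF pose (x, y, z, roll, pitch, yaw); the real function would come from the NeRF."""
    peak = torch.tensor([0.3, -0.1, 0.4, 0.0, 1.2, 0.0])
    return torch.exp(-((pose - peak) ** 2).sum())

pose = torch.zeros(6, requires_grad=True)
optimizer = torch.optim.Adam([pose], lr=5e-2)
for _ in range(300):
    optimizer.zero_grad()
    loss = -view_uncertainty(pose)   # ascend the uncertainty surface
    loss.backward()
    optimizer.step()
print(pose.detach())                  # candidate next-best-view pose
```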

12 pages, 2105 KiB  
Article
An Automated Marker-Less Registration Approach Using Neural Radiance Fields for Potential Use in Mixed Reality-Based Computer-Aided Surgical Navigation of Paranasal Sinus
by Suhyeon Kim, Hyeonji Kim and Younhyun Jung
Abstract
Paranasal sinus surgery, a common treatment for chronic rhinosinusitis, requires exceptional precision due to the proximity of critical anatomical structures. To ensure accurate instrument control and clear visualization of the surgical site, surgeons utilize computer-aided surgical navigation (CSN). A key component of CSN is the registration process, which is traditionally reliant on manual or marker-based techniques. However, there is a growing shift toward marker-less registration methods. In previous work, we investigated a mesh-based registration approach using a Mixed Reality Head-Mounted Display (MR-HMD), specifically the Microsoft HoloLens 2. However, this method faced limitations, including depth holes and invalid values. These issues stemmed from the device’s low-resolution camera specifications and the 3D projection steps required to upscale RGB camera spaces. In this study, we propose a novel automated marker-less registration method leveraging Neural Radiance Field (NeRF) technology with an MR-HMD. To address insufficient depth information in the previous approach, we utilize rendered-depth images generated by the trained NeRF model. We evaluated our method against two other techniques, including prior mesh-based registration, using a facial phantom and three participants. The results demonstrate our proposed method achieves at least a 0.873 mm (12%) improvement in registration accuracy compared to others. Full article

13 pages, 5669 KiB  
Article
Optimization of Video Surveillance System Deployment Based on Space Syntax and Deep Reinforcement Learning
by Bingchan Li and Chunguo Li
Abstract
With the widespread deployment of video surveillance devices, a large number of indoor and outdoor places are under the coverage of cameras, which plays a significant role in enhancing regional safety management and hazard detection. However, a vast number of cameras lead to high installation, maintenance, and analysis costs. At the same time, low-quality images and potential blind spots in key areas prevent the full utilization of the video system’s effectiveness. This paper proposes an optimization method for video surveillance system deployment based on space syntax analysis and deep reinforcement learning. First, space syntax is used to calculate the connectivity value, control value, depth value, and integration of the surveillance area. Combined with visibility and axial analysis results, a weighted index grid map of the area’s surveillance importance is constructed. This index describes the importance of video coverage at a given point in the area. Based on this index map, a deep reinforcement learning network based on DQN (Deep Q-Network) is proposed to optimize the best placement positions and angles for a given number of cameras in the area. Experiments show that the proposed framework, integrating space syntax and deep reinforcement learning, effectively improves video system coverage efficiency and allows for quick adjustment and refinement of camera placement by manually setting parameters for specific areas. Compared to existing coverage-first or experience-based optimization, the proposed method demonstrates significant performance and efficiency advantages. Full article
(This article belongs to the Special Issue Advances in Data-Driven Artificial Intelligence)
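
In a DQN formulation like the one described, the reward for placing a camera could be the gain in importance-weighted coverage over the space-syntax index map. Below is a minimal sketch of scoring one candidate placement; the field of view, range, and the absence of occlusion handling are simplifying assumptions, not details from the paper.

```python
import numpy as np

def coverage_score(importance, cam_xy, heading_deg, fov_deg=90.0, reach=30.0):
    """Sum of importance weights inside a camera's viewing sector.
    importance: weighted index grid (rows = y, cols = x); heading measured from +x axis."""
    h, w = importance.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dx, dy = xs - cam_xy[0], ys - cam_xy[1]
    dist = np.hypot(dx, dy)
    ang = np.degrees(np.arctan2(dy, dx))
    diff = (ang - heading_deg + 180.0) % 360.0 - 180.0   # signed angular offset
    visible = (dist <= reach) & (np.abs(diff) <= fov_deg / 2.0)
    return float(importance[visible].sum())
```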

17 pages, 4607 KiB  
Article
Event-Based Visual/Inertial Odometry for UAV Indoor Navigation
by Ahmed Elamin, Ahmed El-Rabbany and Sunil Jacob
Sensors 2025, 25(1), 61; https://doi.org/10.3390/s25010061 - 25 Dec 2024
Abstract
Indoor navigation is becoming increasingly essential for multiple applications. It is complex and challenging due to dynamic scenes, limited space, and, more importantly, the unavailability of global navigation satellite system (GNSS) signals. Recently, new sensors have emerged, namely event cameras, which show great potential for indoor navigation due to their high dynamic range and low latency. In this study, an event-based visual–inertial odometry approach is proposed, emphasizing adaptive event accumulation and selective keyframe updates to reduce computational overhead. The proposed approach fuses events, standard frames, and inertial measurements for precise indoor navigation. Features are detected and tracked on the standard images. The events are accumulated into frames and used to track the features between the standard frames. Subsequently, the IMU measurements and the feature tracks are fused to continuously estimate the sensor states. The proposed approach is evaluated using both simulated and real-world datasets. Compared with the state-of-the-art U-SLAM algorithm, our approach achieves a substantial reduction in the mean positional error and RMSE in simulated environments, showing up to 50% and 47% reductions along the x- and y-axes, respectively. The approach achieves 5–10 ms latency per event batch and 10–20 ms for frame updates, demonstrating real-time performance on resource-constrained platforms. These results underscore the potential of our approach as a robust solution for real-world UAV indoor navigation scenarios. Full article
(This article belongs to the Special Issue Multi-sensor Integration for Navigation and Environmental Sensing)
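
The event-accumulation step described above can be sketched in a few lines: events (integer pixel coordinates plus polarity) are summed into a signed frame that is then used for feature tracking. The fixed cap on the number of events is an illustrative stand-in for the paper’s adaptive accumulation policy.

```python
import numpy as np

def accumulate_events(xs, ys, ps, width, height, max_events=20000):
    """Build a signed event frame from at most max_events of the newest events.
    xs, ys: integer pixel coordinates; ps: polarities (+1 / -1 or booleans)."""
    xs, ys, ps = xs[-max_events:], ys[-max_events:], ps[-max_events:]
    frame = np.zeros((height, width), dtype=np.float32)
    np.add.at(frame, (ys, xs), np.where(ps > 0, 1.0, -1.0))
    return frame
```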

10 pages, 1474 KiB  
Communication
Comparative Analysis of Low-Cost Portable Spectrophotometers for Colorimetric Accuracy on the RAL Design System Plus Color Calibration Target
by Jaša Samec, Eva Štruc, Inese Berzina, Peter Naglič and Blaž Cugmas
Sensors 2024, 24(24), 8208; https://doi.org/10.3390/s24248208 - 23 Dec 2024
Abstract
Novel low-cost portable spectrophotometers could be an alternative to traditional spectrophotometers and calibrated RGB cameras by offering lower prices and convenient measurements but retaining high colorimetric accuracy. This study evaluated the colorimetric accuracy of low-cost, portable spectrophotometers on the established color calibration target—RAL Design System Plus (RAL+). Four spectrophotometers with a listed price between USD 100–1200 (Nix Spectro 2, Spectro 1 Pro, ColorReader, and Pico) and a smartphone RGB camera were tested on a representative subset of 183 RAL+ colors. Key performance metrics included the devices’ ability to match and measure RAL+ colors in the CIELAB color space using the color difference CIEDE2000 ΔE. The results showed that Nix Spectro 2 had the best performance, matching 99% of RAL+ colors with an estimated ΔE of 0.5–1.05. Spectro 1 Pro and ColorReader matched approximately 85% of colors with ΔE values between 1.07 and 1.39, while Pico and the Asus 8 smartphone matched 54–77% of colors, with ΔE of around 1.85. Our findings showed that low-cost, portable spectrophotometers offered excellent colorimetric measurements. They mostly outperformed existing RGB camera-based colorimetric systems, making them valuable tools in science and industry. Full article
(This article belongs to the Special Issue Recent Trends and Advances in Color and Spectral Sensors: 2nd Edition)
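
Matching a device reading against a palette with the CIEDE2000 difference, as done in this study, can be reproduced with scikit-image; the RAL+ Lab triplets below are made-up placeholders, not the actual target values.

```python
import numpy as np
from skimage.color import deltaE_ciede2000

# Hypothetical palette entries: CIELAB triplets standing in for RAL Design System Plus targets.
ral_plus = {
    "RAL 210 50 15": np.array([50.0, -5.0, -20.0]),
    "RAL 050 50 60": np.array([50.0, 45.0, 40.0]),
}

measured = np.array([49.2, -4.1, -18.7])   # spectrophotometer reading in CIELAB
best_name, best_lab = min(ral_plus.items(),
                          key=lambda kv: deltaE_ciede2000(measured, kv[1]))
print(best_name, float(deltaE_ciede2000(measured, best_lab)))   # closest colour and its ΔE
```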

26 pages, 6416 KiB  
Article
Advanced Monocular Outdoor Pose Estimation in Autonomous Systems: Leveraging Optical Flow, Depth Estimation, and Semantic Segmentation with Dynamic Object Removal
by Alireza Ghasemieh and Rasha Kashef
Sensors 2024, 24(24), 8040; https://doi.org/10.3390/s24248040 - 17 Dec 2024
Abstract
Autonomous technologies have revolutionized transportation, military operations, and space exploration, necessitating precise localization in environments where traditional GPS-based systems are unreliable or unavailable. While widespread for outdoor localization, GPS systems face limitations in obstructed environments such as dense urban areas, forests, and indoor spaces. Moreover, GPS reliance introduces vulnerabilities to signal disruptions, which can lead to significant operational failures. Hence, developing alternative localization techniques that do not depend on external signals is essential, showing a critical need for robust, GPS-independent localization solutions adaptable to different applications, ranging from Earth-based autonomous vehicles to robotic missions on Mars. This paper addresses these challenges using Visual odometry (VO) to estimate a camera’s pose by analyzing captured image sequences in GPS-denied areas tailored for autonomous vehicles (AVs), where safety and real-time decision-making are paramount. Extensive research has been dedicated to pose estimation using LiDAR or stereo cameras, which, despite their accuracy, are constrained by weight, cost, and complexity. In contrast, monocular vision is practical and cost-effective, making it a popular choice for drones, cars, and autonomous vehicles. However, robust and reliable monocular pose estimation models remain underexplored. This research aims to fill this gap by developing a novel adaptive framework for outdoor pose estimation and safe navigation using enhanced visual odometry systems with monocular cameras, especially for applications where deploying additional sensors is not feasible due to cost or physical constraints. This framework is designed to be adaptable across different vehicles and platforms, ensuring accurate and reliable pose estimation. We integrate advanced control theory to provide safety guarantees for motion control, ensuring that the AV can react safely to the imminent hazards and unknown trajectories of nearby traffic agents. The focus is on creating an AI-driven model(s) that meets the performance standards of multi-sensor systems while leveraging the inherent advantages of monocular vision. This research uses state-of-the-art machine learning techniques to advance visual odometry’s technical capabilities and ensure its adaptability across different platforms, cameras, and environments. By merging cutting-edge visual odometry techniques with robust control theory, our approach enhances both the safety and performance of AVs in complex traffic situations, directly addressing the challenge of safe and adaptive navigation. Experimental results on the KITTI odometry dataset demonstrate a significant improvement in pose estimation accuracy, offering a cost-effective and robust solution for real-world applications. Full article
(This article belongs to the Special Issue Sensors for Object Detection, Pose Estimation, and 3D Reconstruction)
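
The core two-view step of monocular visual odometry (estimating relative rotation and a scale-free translation from tracked features) can be sketched with OpenCV as follows; this generic essential-matrix pipeline is not the adaptive framework proposed in the paper.

```python
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """Relative camera rotation/translation (up to scale) between two grayscale frames.
    K: 3x3 camera intrinsics matrix."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t   # t is a direction only; absolute scale is unobservable with one camera
```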

18 pages, 4832 KiB  
Article
An Inter-Method Comparison of Drones, Side-Scan Sonar, Airplanes, and Satellites Used for Eelgrass (Zostera marina) Mapping and Management
by Jillian Carr and Todd Callaghan
Geosciences 2024, 14(12), 345; https://doi.org/10.3390/geosciences14120345 - 17 Dec 2024
Abstract
Remote sensing is heavily relied upon where eelgrass maps are needed for tracking trends, project siting and permitting, water quality assessments, and restoration planning. However, there is only a moderate degree of confidence in the accuracy of maps derived from remote sensing, thus risking inadequate resource protection. In this study, semi-synchronous drone, side-scan sonar, airplane, and satellite missions were conducted at five Massachusetts eelgrass meadows to assess each method’s edge-detection capability and mapping accuracy. To ground-truth the remote sensing surveys, SCUBA divers surveyed the meadow along transects perpendicular to shore to locate the last shoot (i.e., meadow’s edge) and sampled quadrat locations along the transect for percent cover, canopy height, and meadow patchiness. In addition, drop frame underwater camera surveys were conducted to assess the accuracy of each remote sensing survey. Eelgrass meadow delineations derived from each remote sensing method were compared to ground-truthing data to address the following study objectives: (1) determine if and how much eelgrass was missed during manual photointerpretation of the imagery from each remote sensing method, (2) assess map accuracy, as well as the effects of eelgrass percent cover, canopy height, and meadow patchiness on method performance, and (3) make management recommendations regarding the use of remote sensing data for eelgrass mapping. Results showed that all remote sensing methods were associated with the underestimation of eelgrass. At the shallow edge, mean edge detection error was lowest for drone imagery (11.2 m) and increased with decreasing image resolution, up to 38.5 m for satellite imagery. At the deep edge, mean edge detection error varied by survey method but ranged from 72 to 106 m. Maximum edge detection errors across all sites and depths for each survey method were 112.4 m, 121.4 m, 121.7 m, and 106.7 m for drone, sonar, airplane, and satellite data, respectively. The overall accuracy of eelgrass delineations across the survey methods ranged from 76–89% and corresponded with image resolution, where drones performed best, followed by sonar, airplanes, and satellites; however, there was a high degree of site variability. Accuracy at the shallow edge was greater than at the deep edge across all survey types except for satellite, where accuracy was the same at both depths. Accuracy was influenced by eelgrass percent cover, canopy height, and meadow patchiness. Low eelgrass density (i.e., 1–10% cover), patchy eelgrass (i.e., shoots or patches spaced > 5 m) and shorter canopy height (i.e., <22 cm) were associated with reduced accuracy across all methods; however, drones performed best across all scenarios. Management recommendations include applying regulatory buffers to eelgrass maps derived from remote sensing in order to protect meadow edge areas from human disturbances, the prioritization of using SCUBA and high-resolution platforms like drones and sonar for eelgrass mapping, and for existing mapping programs to allocate more resources to ground-truthing along meadow edges. Full article
(This article belongs to the Special Issue Progress in Seafloor Mapping)

20 pages, 7839 KiB  
Article
Normalized Difference Vegetation Index Prediction for Blueberry Plant Health from RGB Images: A Clustering and Deep Learning Approach
by A. G. M. Zaman, Kallol Roy and Jüri Olt
AgriEngineering 2024, 6(4), 4831-4850; https://doi.org/10.3390/agriengineering6040276 - 16 Dec 2024
Abstract
In precision agriculture (PA), monitoring individual plant health is crucial for optimizing yields and minimizing resources. The normalized difference vegetation index (NDVI), a widely used health indicator, typically relies on expensive multispectral cameras. This study introduces a method for predicting the NDVI of blueberry plants using RGB images and deep learning, offering a cost-effective alternative. To identify individual plant bushes, K-means and Gaussian Mixture Model (GMM) clustering were applied. RGB images were transformed into the HSL (hue, saturation, lightness) color space, and the hue channel was constrained using percentiles to exclude extreme values while preserving relevant plant hues. Further refinement was achieved through adaptive pixel-to-pixel distance filtering combined with the Davies–Bouldin Index (DBI) to eliminate pixels deviating from the compact cluster structure. This enhanced clustering accuracy and enabled precise NDVI calculations. A convolutional neural network (CNN) was trained and tested to predict NDVI-based health indices. The model achieved strong performance with mean squared losses of 0.0074, 0.0044, and 0.0021 for training, validation, and test datasets, respectively. The test dataset also yielded a mean absolute error of 0.0369 and a mean percentage error of 4.5851. These results demonstrate the NDVI prediction method’s potential for cost-effective, real-time plant health assessment, particularly in agrobotics. Full article
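
Two pieces of this pipeline are easy to make concrete: the NDVI definition used as the prediction target, and a percentile-based hue mask for isolating plant pixels. The sketch below uses the HSV hue channel from scikit-image for brevity, whereas the paper works in HSL, and the percentile bounds are illustrative.

```python
import numpy as np
from skimage.color import rgb2hsv

def ndvi(nir, red):
    """Ground-truth NDVI from multispectral bands: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + 1e-8)

def hue_mask(rgb, lo_pct=5, hi_pct=95):
    """Keep pixels whose hue lies between two percentiles (illustrative thresholds)."""
    hue = rgb2hsv(rgb)[..., 0]
    lo, hi = np.percentile(hue, [lo_pct, hi_pct])
    return (hue >= lo) & (hue <= hi)
```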

18 pages, 2138 KiB  
Article
Realisation of an Application Specific Multispectral Snapshot-Imaging System Based on Multi-Aperture-Technology and Multispectral Machine Learning Loops
by Lennard Wunsch, Martin Hubold, Rico Nestler and Gunther Notni
Sensors 2024, 24(24), 7984; https://doi.org/10.3390/s24247984 - 14 Dec 2024
Abstract
Multispectral imaging (MSI) enables the acquisition of spatial and spectral image-based information in one process. Spectral scene information can be used to determine the characteristics of materials based on reflection or absorption and thus their material compositions. This work focuses on so-called multi aperture imaging, which enables a simultaneous capture (snapshot) of spectrally selective and spatially resolved scene information. There are some limiting factors for the spectral resolution when implementing this imaging principle, e.g., usable sensor resolutions and area, and required spatial scene resolution or optical complexity. Careful analysis is therefore needed for the specification of the multispectral system properties and its realisation. In this work we present a systematic approach for the application-related implementation of this kind of MSI. We focus on spectral system modeling, data analysis, and machine learning to build a universally usable multispectral loop to find the best sensor configuration. The approach presented is demonstrated and tested on the classification of waste, a typical application for multispectral imaging. Full article

15 pages, 17109 KiB  
Article
Investigations on the Performance of a 5 mm CdTe Timepix3 Detector for Compton Imaging Applications
by Juan S. Useche Parra, Gerardo Roque, Michael K. Schütz, Michael Fiederle and Simon Procz
Sensors 2024, 24(24), 7974; https://doi.org/10.3390/s24247974 - 13 Dec 2024
Abstract
Nuclear power plant decommissioning requires the rapid and accurate classification of radioactive waste in narrow spaces and under time constraints. Photon-counting detector technology offers an effective solution for the quick classification and detection of radioactive hotspots in a decommissioning environment. This paper characterizes a 5 mm CdTe Timepix3 detector and evaluates its feasibility as a single-layer Compton camera. The sensor’s electron mobility–lifetime product and resistivity are studied across bias voltages ranging from −100 V to −3000 V, obtaining values of μₑτₑ = (1.2 ± 0.1) × 10⁻³ cm²·V⁻¹, and two linear regions with resistivities of ρI = (5.8 ± 0.2) GΩ cm and ρII = (4.1 ± 0.1) GΩ cm. Additionally, two calibration methodologies are assessed to determine the most suitable for Compton applications, achieving an energy resolution of 16.3 keV for the ¹³⁷Cs photopeak. The electron’s drift time in the sensor is estimated to be (122.3 ± 7.4) ns using cosmic muons. Finally, a Compton reconstruction of two simultaneous point-like sources is performed, demonstrating the detector’s capability to accurately locate radiation hotspots with a ∼51 cm resolution. Full article
(This article belongs to the Special Issue Recent Advances in X-Ray Sensing and Imaging)
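
A single-layer Compton camera reconstructs source directions from cones whose opening angle follows from Compton kinematics; a minimal calculation for a ¹³⁷Cs photon is shown below (the 200 keV deposited energy is just an example value).

```python
import numpy as np

M_E_C2 = 511.0  # electron rest energy, keV

def compton_cone_angle(e_deposited, e_total):
    """Scattering-cone half-angle (radians) from deposited and total photon energy:
    cos(theta) = 1 - m_e c^2 * (1/E_scattered - 1/E_total)."""
    e_scattered = e_total - e_deposited
    cos_theta = 1.0 - M_E_C2 * (1.0 / e_scattered - 1.0 / e_total)
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

# Example: a 661.7 keV 137Cs photon depositing 200 keV in the first interaction.
print(np.degrees(compton_cone_angle(200.0, 661.7)))
```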

6 pages, 1835 KiB  
Proceeding Paper
Innovative Cone Clustering and Path Planning for Autonomous Formula Student Race Cars Using Cameras
by Balázs Szőnyi and Gergő Ignéczi
Eng. Proc. 2024, 79(1), 96; https://doi.org/10.3390/engproc2024079096 - 11 Dec 2024
Abstract
In this research, we present a novel approach for cone clustering, path planning, and path visualization in autonomous Formula Student race cars, utilizing the YOLOv8 model and a ZED 2 camera, executed on a Jetson Orin computer. Our system first identifies and then deprojects the positions of cones in space, employing an advanced clustering mechanism to generate midpoints and draw connecting lines. In previous clustering algorithms, cones were stored separately by color and connected based on relevance to create the lane edges. However, our proposed solution adopts a fundamentally different approach. Cones on the left and right sides within a dynamically changing maximum and minimum distance are connected by a central line, and the midpoint of this line is marked distinctly. Cones connected in this manner are then linked by their positions to form the edges of the track. The midpoints on these central lines are displayed as markers, facilitating the visualization of the optimal path. In our research, we also cover the analysis of the clustering algorithm on global maps. The implementation utilizes the ROS 2 framework for real-time data handling and visualization. Our results demonstrate the system’s efficiency in dynamic environments, highlighting potential advancements in the field of autonomous racing. The limitation of our approach is the dependency on precise cone detection and classification, which may be affected by environmental factors such as lighting and cone positioning. Full article
(This article belongs to the Proceedings of The Sustainable Mobility and Transportation Symposium 2024)
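
The midpoint construction described above (connect left and right cones within a distance window and take the centre of each connecting line) can be sketched as follows; the distance bounds are illustrative, not the dynamically adapted values used on the car.

```python
import numpy as np

def centreline(left_cones, right_cones, d_min=2.0, d_max=6.0):
    """Pair each left cone with the nearest right cone within [d_min, d_max] and return
    the midpoints of the connecting lines. Cone arrays have shape (N, 2) in metres."""
    midpoints = []
    for l in left_cones:
        d = np.linalg.norm(right_cones - l, axis=1)
        j = np.argmin(d)
        if d_min <= d[j] <= d_max:
            midpoints.append((l + right_cones[j]) / 2.0)
    return np.array(midpoints)
```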

23 pages, 2544 KiB  
Article
Hybrid Artificial-Intelligence-Based System for Unmanned Aerial Vehicle Detection, Localization, and Tracking Using Software-Defined Radio and Computer Vision Techniques
by Pablo López-Muñoz, Luis Gimeno San Frutos, Christian Abarca, Francisco José Alegre, Jose Luis Calle and Jose F. Monserrat
Telecom 2024, 5(4), 1286-1308; https://doi.org/10.3390/telecom5040064 - 11 Dec 2024
Abstract
The proliferation of drones in civilian environments has raised growing concerns about their misuse, highlighting the need to develop efficient detection systems to protect public and private spaces. This article presents a hybrid approach for UAV detection that combines two artificial-intelligence-based methods to improve system accuracy. The first method uses a software-defined radio (SDR) to analyze the radio spectrum, employing autoencoders to detect drone control signals and identify the presence of these devices. The second method is a computer vision module consisting of fixed cameras and a PTZ camera, which uses the YOLOv10 object detection algorithm to identify UAVs in real time from video sequences. Additionally, this module integrates a localization and tracking algorithm, allowing the tracking of the intruding UAV’s position. Experimental results demonstrate high detection accuracy, a significant reduction in false positives for both methods, and remarkable effectiveness in UAV localization and tracking with the PTZ camera. These findings position the proposed system as a promising solution for security applications. Full article

13 pages, 616 KiB  
Article
Dose–Response Curve in REMA Test: Determination from Smartphone-Based Pictures
by Eugene B. Postnikov, Alexander V. Sychev and Anastasia I. Lavrova
Analytica 2024, 5(4), 619-631; https://doi.org/10.3390/analytica5040041 - 10 Dec 2024
Abstract
We report a workflow and a software description for digital image colorimetry aimed at obtaining a quantitative dose–response curve and the minimal inhibitory concentration in the Resazurin Microtiter Assay (REMA) test of the activity of antimycobacterial drugs. The principle of this analysis is based on the newly established correspondence between the intensity of the a* channel of the CIE L*a*b* colour space and the concentration of resorufin produced in the course of this test. The whole procedure can be carried out using free software. It has sufficiently mild requirements for the quality of colour images, which can be taken by a typical smartphone camera. Thus, the approach does not impose additional costs on the medical examination points and is widely accessible. Its efficiency is verified by applying it to the case of two representatives of substituted 2-(quinolin-4-yl) imidazolines. The direct comparison with the data on the indicator’s fluorescence obtained using a commercial microplate reader argues that the proposed approach provides results of the same range of accuracy on the quantitative level. As a result, it would be possible to apply the strategy not only for new low-cost studies but also for expanding databases on drug candidates by quantitatively reprocessing existing data, which were earlier documented by images of microplates but analysed only qualitatively. Full article
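
The two quantitative steps of the workflow, extracting the a* channel of CIE L*a*b* from a well photo and fitting a dose–response curve, can be sketched as below; the four-parameter logistic model and the example readings are assumptions for illustration, not the paper’s exact procedure or data.

```python
import numpy as np
from skimage.color import rgb2lab
from scipy.optimize import curve_fit

def a_star(rgb_patch):
    """Mean a* value of a well ROI; rgb_patch is an RGB image (float in [0, 1] or uint8)."""
    return rgb2lab(rgb_patch)[..., 1].mean()

def dose_response(c, top, bottom, ic50, slope):
    """Four-parameter logistic dose-response curve (illustrative model choice)."""
    return bottom + (top - bottom) / (1.0 + (c / ic50) ** slope)

# conc: drug concentrations; a_vals: a* readings extracted from well photos (made-up values).
conc = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
a_vals = np.array([38.0, 36.0, 30.0, 15.0, 6.0, 4.0])
params, _ = curve_fit(dose_response, conc, a_vals, p0=[40.0, 4.0, 1.5, 2.0])
print(params)   # fitted top, bottom, IC50, slope
```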