Search Results (54)

Search Parameters:
Keywords = laser range finder

21 pages, 3173 KiB  
Article
Methods for Assessing the Effectiveness of Modern Counter Unmanned Aircraft Systems
by Konrad D. Brewczyński, Marek Życzkowski, Krzysztof Cichulski, Kamil A. Kamiński, Paraskevi Petsioti and Geert De Cubber
Remote Sens. 2024, 16(19), 3714; https://doi.org/10.3390/rs16193714 - 6 Oct 2024
Viewed by 1884
Abstract
Given the growing threat posed by the widespread availability of unmanned aircraft systems (UASs), which can be utilised for various unlawful activities, the need for a standardised method to evaluate the effectiveness of systems capable of detecting, tracking, and identifying (DTI) these devices has become increasingly urgent. This article draws upon research conducted under the European project COURAGEOUS, where 260 existing drone detection systems were analysed, and a methodology was developed for assessing the suitability of C-UASs in relation to specific threat scenarios. The article provides an overview of the most commonly employed technologies in C-UASs, such as radars, visible light cameras, thermal imaging cameras, laser range finders (lidars), and acoustic sensors. It explores the advantages and limitations of each technology, highlighting their reliance on different physical principles, and also briefly touches upon the legal implications associated with their deployment. The article presents the research framework and provides a structural description, alongside the functional and performance requirements, as well as the defined metrics. Furthermore, the methodology for testing the usability and effectiveness of individual C-UAS technologies in addressing specific threat scenarios is elaborated. Lastly, the article offers a concise list of prospective research directions concerning the analysis and evaluation of these technologies. Full article
(This article belongs to the Special Issue Drone Remote Sensing II)

20 pages, 4749 KiB  
Article
Crack Width Recognition of Tunnel Tube Sheet Based on YOLOv8 Algorithm and 3D Imaging
by Xunqian Xu, Qi Li, Shue Li, Fengyi Kang, Guozhi Wan, Tao Wu and Siwen Wang
Buildings 2024, 14(2), 531; https://doi.org/10.3390/buildings14020531 - 16 Feb 2024
Cited by 3 | Viewed by 1499
Abstract
Tunnel crack width identification is subject to operating time constraints, limited operating space, high equipment testing costs, and other issues. In this paper, a large subway tunnel is taken as the research object, and a tunnel rail inspection car equipped with industrial cameras is used as the operating platform to meet the requirement of recognizing tunnel tube sheet cracks wider than 0.2 mm. Measuring instruments verified that camera imaging quality is reliable while the rail inspection car moves at uniform speed. A laser rangefinder was added to measure the object distance accurately and to calculate the angle between the imaging plane and the plane to be measured, so that the pixel resolution of the image could be corrected for three-dimensional cracks. The images captured by the industrial camera are preprocessed, the YOLOv8 algorithm is used to extract the crack morphology intelligently, and the actual width is finally calculated from the spacing between two points on the crack. The crack width obtained by YOLOv8-based image processing closely matches the width obtained by manual inspection: the crack width detection error rate ranges from 0% to 11%, with the average error rate remaining below 4%, which is 1% lower than the error rate of a Support Vector Machine (SVM) crack extraction model. Using the tunnel inspection vehicle as a platform equipped with an industrial camera, YOLOv8 therefore recognizes the shape and width of cracks on the tunnel tube sheet surface with a high degree of accuracy. The number of pixels is inversely proportional to the detection error rate, while the angle between the imaging plane and the plane under test is directly proportional to it.
The angle αi between the vertical axis through the lens midpoint and the line connecting the shooting target with the lens center point is complementary to the angle θi between the measured plane and the imaging plane, i.e., αi + θi = 90°. Therefore, crack recognition of the tunnel tube sheet using the inspection vehicle as a mobile platform equipped with an industrial camera and based on the YOLOv8 algorithm is feasible and has the prospect of wide application, providing a reference method for the detection of cracks in tunnel tube sheets. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)
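The pixel-resolution correction the abstract describes, scaling a crack's pixel width by the laser-measured object distance and correcting for the angle between the imaging plane and the measured plane, can be sketched with a simple pinhole-camera model. The function name, parameter choices, and example numbers below are illustrative assumptions, not taken from the paper:

```python
import math

def crack_width_mm(pixel_width, object_distance_mm, focal_length_mm,
                   pixel_pitch_mm, theta_deg):
    """Estimate physical crack width from its width in pixels.

    Pinhole model: ground sampling distance (mm per pixel)
    = object_distance * pixel_pitch / focal_length; the result is then
    divided by cos(theta) to correct for the tilt between the imaging
    plane and the measured plane (theta = 0 means the planes are parallel).
    """
    gsd = object_distance_mm * pixel_pitch_mm / focal_length_mm
    return pixel_width * gsd / math.cos(math.radians(theta_deg))

# Example: a 12 px wide crack at 500 mm object distance, 25 mm lens,
# 3.45 µm pixel pitch, 10° tilt between the two planes.
w = crack_width_mm(12, 500.0, 25.0, 0.00345, 10.0)
```

As the abstract notes, the correction grows with the tilt angle: a larger θ inflates the recovered width, which is why the measured-plane angle is directly proportional to the detection error.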

24 pages, 2968 KiB  
Review
A Survey on Robot Semantic Navigation Systems for Indoor Environments
by Raghad Alqobali, Maha Alshmrani, Reem Alnasser, Asrar Rashidi, Tareq Alhmiedat and Osama Moh’d Alia
Appl. Sci. 2024, 14(1), 89; https://doi.org/10.3390/app14010089 - 21 Dec 2023
Cited by 17 | Viewed by 3362
Abstract
Robot autonomous navigation has become a vital area in the industrial development of minimizing labor-intensive tasks. Most of the recently developed robot navigation systems are based on perceiving geometrical features of the environment, utilizing sensory devices such as laser scanners, range-finders, and microwave radars to construct an environment map. However, in robot navigation, scene understanding has become essential for comprehending the area of interest and achieving improved navigation results. The semantic model of the indoor environment provides the robot with a representation that is closer to human perception, thereby enhancing the navigation task and human–robot interaction. However, semantic navigation systems require the utilization of multiple components, including geometry-based and vision-based systems. This paper presents a comprehensive review and critical analysis of recently developed robot semantic navigation systems in the context of their applications for semantic robot navigation in indoor environments. Additionally, we propose a set of evaluation metrics that can be considered to assess the efficiency of any robot semantic navigation system. Full article
(This article belongs to the Special Issue Research and Development of Intelligent Robot)

18 pages, 19169 KiB  
Article
Range–Visual–Inertial Odometry with Coarse-to-Fine Image Registration Fusion for UAV Localization
by Yun Hao, Mengfan He, Yuzhen Liu, Jiacheng Liu and Ziyang Meng
Cited by 7 | Viewed by 2667
Abstract
In Global Navigation Satellite System (GNSS)-denied environments, image registration has emerged as a prominent approach to utilize visual information for estimating the position of Unmanned Aerial Vehicles (UAVs). However, traditional image-registration-based localization methods encounter limitations, such as strong dependence on the prior initial position information. In this paper, we propose a systematic method for UAV geo-localization. In particular, an efficient range–visual–inertial odometry (RVIO) is proposed to provide local tracking, which utilizes measurements from a 1D Laser Range Finder (LRF) to suppress scale drift in the odometry. To overcome the differences in seasons, lighting conditions, and other factors between satellite and UAV images, we propose an image-registration-based geo-localization method in a coarse-to-fine manner that utilizes the powerful representation ability of Convolutional Neural Networks (CNNs). Furthermore, to ensure the accuracy of global optimization, we propose an adaptive weight assignment method based on the evaluation of the quality of image-registration-based localization. The proposed method is extensively evaluated in both synthetic and real-world environments. The results demonstrate that the proposed method achieves global drift-free estimation, enabling UAVs to accurately localize themselves in GNSS-denied environments. Full article
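One way a 1D LRF can suppress monocular scale drift, sketched here under simple assumptions (the paper's actual RVIO formulation is more involved), is to fit a single least-squares scale factor aligning odometry-predicted ranges with the laser readings. The helper name and toy numbers are illustrative, not from the paper:

```python
import numpy as np

def estimate_scale(vio_ranges, lrf_ranges):
    """Least-squares scale factor s minimizing ||s * vio - lrf||^2,
    i.e. the metric scale that best aligns the (scale-drifting)
    visual-odometry range predictions with the 1D laser range finder
    measurements of the same distances."""
    vio = np.asarray(vio_ranges, dtype=float)
    lrf = np.asarray(lrf_ranges, dtype=float)
    return float(vio @ lrf / (vio @ vio))

# Toy case: the VIO underestimates metric distances by a factor of 2.
s = estimate_scale([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])   # → 2.0
```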

19 pages, 18941 KiB  
Article
Automatically Extracting Rubber Tree Stem Shape from Point Cloud Data Acquisition Using a B-Spline Fitting Program
by Tuyu Li, Yong Zheng, Chang Huang, Jianhua Cao, Lingling Wang and Guihua Wang
Forests 2023, 14(6), 1122; https://doi.org/10.3390/f14061122 - 29 May 2023
Cited by 1 | Viewed by 2257
Abstract
Natural rubber is an important strategic raw material, used in tires, gloves, and insulating products, that is mainly obtained by cutting the bark of rubber trees. However, the complex contour curve of a rubber tree trunk is hard to fit with a tapping machine, so collecting trunk contour curves would be useful for the development of tapping machines. In this study, an acquisition system based on laser-ranging technology was proposed to collect point cloud data of rubber tree trunks, and a B-spline fitting program was compiled in Matrix Laboratory (MATLAB) to extract the trunks’ contour curves. The acquisition system is composed of power, controller, driver, laser range finder, and data transmission modules. An automatic extraction experiment on the contour curves of rubber tree trunks was carried out to verify the feasibility and accuracy of the acquisition system. The results showed that the degree of rubber tree trunk characteristic recognition reached 94.67%, meaning that the trunk contour curves were successfully extracted and that the B-spline fitting program is suitable for extracting the irregular curves of rubber tree trunks. The coefficient of variation across repeated collections was 0.04%, indicating that changes in relative position and acquisition direction have little influence on the extraction and that the accuracy of the acquisition system is high and stable. It was therefore unnecessary to adjust the position of the acquisition device before collection, which considerably improves acquisition efficiency.
The acquisition system proposed in this study is meaningful to the practical production and application of agroforestry and can not only improve the precision of the rubber tapping process by combining with an automatic rubber tapping machine but can also provide technical support for the prediction of rubber wood volume and the development of ring-cutting equipment for other fruit trees. Full article
(This article belongs to the Special Issue Stress Resistance of Rubber Trees: From Genetics to Ecosystem)
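A Python equivalent of the MATLAB B-spline fitting step might look like the following, using SciPy's `splprep`/`splev` on a synthetic noisy contour. The paper's program and data are not reproduced here; the semicircular "trunk contour", noise level, and smoothing factor are all stand-ins:

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Synthetic stand-in for a laser-scanned trunk contour: a noisy arc
# of radius 100 (units arbitrary), 50 sample points.
t = np.linspace(0.0, np.pi, 50)
x = 100.0 * np.cos(t) + np.random.default_rng(0).normal(0.0, 0.5, t.size)
y = 100.0 * np.sin(t)

# Fit a smoothing cubic B-spline through the point cloud; the smoothing
# factor s trades fidelity to the raw points against curve smoothness.
tck, u = splprep([x, y], s=25.0)

# Evaluate the fitted contour curve on a dense parameter grid.
xs, ys = splev(np.linspace(0.0, 1.0, 200), tck)
```

The dense (xs, ys) samples are the kind of smooth contour curve a tapping machine controller could follow.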

11 pages, 1701 KiB  
Communication
Measurement of Trunk Movement during Sit-to-Stand Motion Using Laser Range Finders: A Preliminary Study
by Haruki Toda, Kiyohiro Omori, Katsuya Fukui and Takaaki Chin
Sensors 2023, 23(4), 2022; https://doi.org/10.3390/s23042022 - 10 Feb 2023
Viewed by 2190
Abstract
The sit-to-stand (STS) motion is used to evaluate physical function in frail older adults. Mounting sensors on the body or using a camera is normally necessary to measure trunk movement during the STS motion. We therefore developed a simple measurement method that embeds laser range finders in the backrest and seat of a chair and can be used in daily-life situations. The objective of this study was to validate the performance of the proposed measurement method against an optical motion capture (MoCap) system during STS motion. The STS motions of three healthy young adults were simultaneously measured under seven conditions using the sensor-embedded chair and the optical MoCap system. We evaluated the waveform similarity, absolute error, and relationship of the trunk joint angular excursions between the two measurement methods. The experimental results indicated high waveform similarity in the trunk flexion phase regardless of the STS conditions, and a strong relationship between the two methods with respect to the angular excursion of trunk flexion. Although the angular excursion of trunk extension exhibited a large error, the developed sensor-embedded chair can evaluate trunk flexion during the STS motion, which is a characteristic of frail older adults. Full article
(This article belongs to the Special Issue Wearable or Markerless Sensors for Gait and Movement Analysis)

17 pages, 5939 KiB  
Article
Sensor Equipped UAS for Non-Contact Bridge Inspections: Field Application
by Roya Nasimi, Fernando Moreu and G. Matthew Fricke
Sensors 2023, 23(1), 470; https://doi.org/10.3390/s23010470 - 1 Jan 2023
Cited by 6 | Viewed by 3321
Abstract
In the future, sensors mounted on uncrewed aerial systems (UASs) will play a critical role in increasing both the speed and safety of structural inspections. Environmental and safety concerns make structural inspections and maintenance challenging when conducted using traditional methods, especially for large structures. Methods developed and tested in the laboratory need to be tested in the field on full-size structures to identify their potential for full implementation. This paper presents results from a full-scale field implementation of a novel sensor-equipped UAS that measures non-contact transverse displacement of a pedestrian bridge. To this end, the authors modified and upgraded a low-cost system that previously showed promise in laboratory and small-scale outdoor settings so that it could be tested on an in-service bridge. The upgraded UAS uses a commodity drone platform, low-cost sensors including a laser range finder, and a computer-vision-based algorithm, with the aim of measuring bridge displacements under load that are indicative of structural problems. The aim of this research is to alleviate the costs and challenges associated with sensor attachment in bridge inspections and deliver the first prototype of a UAS-based non-contact out-of-plane displacement measurement. This work helps to define the capabilities and limitations of the proposed low-cost system in obtaining non-contact transverse displacement in outdoor experiments. Full article
(This article belongs to the Special Issue Sensor Based Perception for Field Robotics)

11 pages, 3326 KiB  
Article
Interesting Features Finder: A New Approach to Multispectral Image Analysis
by Vincenzo Palleschi, Luciano Marras and Maria Angela Turchetti
Heritage 2022, 5(4), 4089-4099; https://doi.org/10.3390/heritage5040211 - 11 Dec 2022
Cited by 3 | Viewed by 2312
Abstract
In this paper, we discuss a new approach to the analysis of multi/hyper-spectral data sets, based on the Interesting Features Finder (IFF) method. The IFF is a simple algorithm recently proposed in the framework of Laser-Induced Breakdown Spectroscopy (LIBS) spectral analysis for detecting ‘interesting’ spectral features independently of the variance they represent in a set of spectra. To test the usefulness of this method to multispectral analysis, we show in this paper the results of its application on the recovery of a ‘lost’ painting from the Etruscan hypogeal tomb of the Volumni (3rd century BCE—1st century CE) in Perugia, Italy. The results obtained applying the IFF algorithm are compared with the results obtained by applying Blind Source Separation (BSS) techniques and Self-Organized Maps (SOM) to a multispectral set of 17 fluorescence and reflection images. From this comparison emerges the possibility of using the IFF algorithm to obtain rapidly and simultaneously, by varying a single parameter in a range from 0 to 1, several sets of elaborated images all containing the ‘interesting’ features and carrying information comparable to what could have been obtained by BSS and SOM, respectively. Full article

19 pages, 8998 KiB  
Article
Improvement of the Sensor Capability of the NAO Robot by the Integration of a Laser Rangefinder
by Vincenzo Bonaiuto and Andrea Zanela
Appl. Syst. Innov. 2022, 5(6), 105; https://doi.org/10.3390/asi5060105 - 24 Oct 2022
Viewed by 2898
Abstract
This paper focuses on integrating a laser rangefinder system with an anthropomorphic robot (NAO6—Aldebaran, United Robotics Group) to improve its sensory and operational capabilities, as part of a larger project concerning the use of these systems in “assisted living” activities. The additional sensor enables the robot to reconstruct its surroundings by integrating new information with that identified by the on-board sensors. Thus, it can identify more objects in a scene and detect any obstacles along its navigation path. This feature improves the efficiency of navigation algorithms, increasing movement competence in environments where people live and work; such environments are characterized by details and specificities within a range of distances that best suit the new robot design. The laser-finder integration project presented here consists of two parts: the mechanical part, which concerns the NAO robot’s head, and the software part, which provides the robot with the software drivers needed to integrate the new sensor with its acquisition system. Some experimental results in an actual environment are presented. Full article
(This article belongs to the Special Issue New Trends in Mechatronics and Robotic Systems)

18 pages, 4487 KiB  
Article
A Novel and Simplified Extrinsic Calibration of 2D Laser Rangefinder and Depth Camera
by Wei Zhou, Hailun Chen, Zhenlin Jin, Qiyang Zuo, Yaohui Xu and Kai He
Cited by 1 | Viewed by 2447
Abstract
It is difficult to directly obtain corresponding features between two-dimensional (2D) laser-range-finder (LRF) scan points and the camera depth point cloud, which leads to a cumbersome calibration process and low calibration accuracy. To address this problem, we propose a calibration method that constructs point-line constraint relations between the observational features of the 2D LRF and the depth camera using a specific calibration board. From observations at two different poses, we build an overdetermined system of equations based on the point-line constraints and solve for the coordinate transformation parameters between the 2D LRF and the depth camera by the least-squares (LSQ) method. The number of observations and the observation poses are adjusted adaptively according to the calibration error and a threshold. Experimental verification and comparison with existing methods show that the proposed method easily and efficiently solves the joint calibration of a 2D LRF and a depth camera, and meets the application requirements of multi-sensor fusion for mobile robots. Full article
(This article belongs to the Topic Advances in Mobile Robotics Navigation)
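The core numerical step, stacking point-line constraints from the two poses into an overdetermined linear system and solving it by least squares, can be sketched generically. The `solve_extrinsics` helper and the toy two-parameter system below are illustrative, not the paper's actual parameterization:

```python
import numpy as np

def solve_extrinsics(A, b):
    """Solve the stacked (overdetermined) constraint system A x = b in
    the least-squares sense, and report the residual RMS, which plays
    the role of the calibration error checked against a threshold."""
    x, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
    rms = float(np.sqrt(np.mean((A @ x - b) ** 2)))
    return x, rms

# Toy example: recover x = (2, -1) from 6 noisy scalar constraints.
rng = np.random.default_rng(1)
A = rng.normal(size=(6, 2))
b = A @ np.array([2.0, -1.0]) + rng.normal(0.0, 1e-3, 6)
params, err = solve_extrinsics(A, b)
```

In the adaptive scheme the abstract describes, a residual `err` above the threshold would trigger collecting observations from an additional pose and re-stacking the system.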

26 pages, 7318 KiB  
Article
Precise Target Geo-Location of Long-Range Oblique Reconnaissance System for UAVs
by Xuefei Zhang, Guoqin Yuan, Hongwen Zhang, Chuan Qiao, Zhiming Liu, Yalin Ding and Chongyang Liu
Sensors 2022, 22(5), 1903; https://doi.org/10.3390/s22051903 - 28 Feb 2022
Cited by 5 | Viewed by 2962
Abstract
High-precision, real-time, long-range target geo-location is crucial to UAV reconnaissance and target strikes. Traditional geo-location methods depend heavily on the accuracies of GPS/INS and of the target elevation, which restricts the target geo-location accuracy of a long-range oblique reconnaissance system (LRORS). Moreover, due to laser range limitations, common real-time means of improving accuracy, such as laser range finders, DEMs, and geographic reference data, are inappropriate for long-range UAVs. To address these problems, a set of work patterns and a novel geo-location method are proposed in this paper. The proposed method is not restricted by conditions such as GPS/INS accuracy, target elevation, or range-finding instrumentation. Specifically, three steps are performed: First, the rough geo-location of the target is calculated using the traditional method. Second, the target is re-imaged according to the rough geo-location; owing to errors in GPS/INS and target elevation, there is a re-projection error between the actual target points and the calculated projection points. Third, a weighted filtering algorithm processes the re-projection error to obtain an optimized target geo-location. The process is repeated until the target geo-location estimate converges on the true value. The geo-location accuracy is improved by both the work pattern and the optimization algorithm. The proposed method was verified by simulation and a flight experiment. The results showed that the proposed method improves geo-location accuracy by 38.8 times and 22.5 times compared with traditional methods and DEM methods, respectively, indicating that our method is efficient, robust, and achieves high-precision target geo-location with an easy implementation. Full article
(This article belongs to the Section Vehicular Sensing)
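The iterative refine-by-reprojection loop in the three steps above can be sketched as follows. The damped correction stands in for the paper's weighted filtering algorithm, and the identity "camera" in the example is a toy assumption, not a real projection model:

```python
import numpy as np

def refine_geolocation(initial_est, project, observed_px,
                       gain=0.5, tol=1e-6, max_iter=100):
    """Iteratively refine a target geo-location estimate: re-project
    the current estimate, form the re-projection error against the
    observed image point, and apply a damped (weight-like) correction
    until the estimate converges. `project` maps a ground position to
    image coordinates."""
    est = np.asarray(initial_est, dtype=float)
    for _ in range(max_iter):
        err = observed_px - project(est)
        if np.linalg.norm(err) < tol:
            break
        est = est + gain * err  # damped correction step
    return est

# Toy linear "camera": identity projection, so the loop should drive
# the estimate to the true target position.
truth = np.array([10.0, -3.0])
est = refine_geolocation([0.0, 0.0], lambda p: p, truth)
```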

23 pages, 55145 KiB  
Article
Signal Processing Platform for Long-Range Multi-Spectral Electro-Optical Systems
by Nikola Latinović, Ilija Popadić, Branko Tomić, Aleksandar Simić, Petar Milanović, Srećko Nijemčević, Miroslav Perić and Mladen Veinović
Sensors 2022, 22(3), 1294; https://doi.org/10.3390/s22031294 - 8 Feb 2022
Cited by 7 | Viewed by 4357
Abstract
In this paper, we present a hardware and software platform for signal processing (SPP) in long-range, multi-spectral, electro-optical systems (MSEOS). Such systems integrate various cameras, such as low-light color, medium- or long-wave-infrared thermal, and short-wave-infrared cameras, together with other sensors such as laser range finders, radars, and GPS receivers, on rotational pan-tilt positioner platforms. The SPP is designed with the main goal of controlling all components of an MSEOS and executing complex signal processing algorithms such as video stabilization, artificial-intelligence-based target detection, target tracking, video enhancement, target illumination, and multi-sensory image fusion. Such algorithms can be very computationally demanding, so the SPP runs them by splitting processing tasks between a field-programmable gate array (FPGA) unit, a multicore microprocessor (MCuP), and a graphics processing unit (GPU). Additionally, multiple SPPs can be linked together via an internal Gbps Ethernet-based network to balance the processing load. A detailed description of the SPP system and experimental workload results for typical algorithms on a demonstration MSEOS are given. Finally, we give remarks on upgrading SPPs as novel FPGAs, MCuPs, and GPUs become available. Full article
(This article belongs to the Section Electronic Sensors)

16 pages, 2095 KiB  
Article
Development of a Mobile Robot That Plays Tag with Touch-and-Away Behavior Using a Laser Range Finder
by Yoshitaka Kasai, Yutaka Hiroi, Kenzaburo Miyawaki and Akinori Ito
Appl. Sci. 2021, 11(16), 7522; https://doi.org/10.3390/app11167522 - 17 Aug 2021
Cited by 5 | Viewed by 2470
Abstract
The development of robots that play with humans is a challenging topic for robotics. We are developing a robot that plays tag with human players. To realize such a robot, it needs to observe the players and obstacles around it, chase a target player, and touch the player without collision. To achieve this task, we propose two methods. The first one is the player tracking method, by which the robot moves towards a virtual circle surrounding the target player. We used a laser range finder (LRF) as a sensor for player tracking. The second one is a motion control method after approaching the player. Here, the robot moves away from the player by moving towards the opposite side to the player. We conducted a simulation experiment and an experiment using a real robot. Both experiments proved that with the proposed tracking method, the robot properly chased the player and moved away from the player without collision. The contribution of this paper is the development of a robot control method to approach a human and then move away safely. Full article
(This article belongs to the Special Issue Laser Sensing in Robotics)
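The two control ideas above, heading for a virtual circle around the player and then retreating to the opposite side, reduce to simple plane geometry. The function names, coordinates, and radii below are illustrative, not the paper's parameters:

```python
import math

def approach_point(robot, player, radius):
    """Point on the virtual circle of the given radius around the
    player, on the segment from robot to player: the robot heads here
    instead of at the player, so it can touch without colliding."""
    dx, dy = player[0] - robot[0], player[1] - robot[1]
    d = math.hypot(dx, dy)
    return (player[0] - radius * dx / d, player[1] - radius * dy / d)

def retreat_point(robot, player, distance):
    """After the touch, move away along the robot-player line, toward
    the side opposite the player (the touch-and-away behavior)."""
    dx, dy = player[0] - robot[0], player[1] - robot[1]
    d = math.hypot(dx, dy)
    return (robot[0] - distance * dx / d, robot[1] - distance * dy / d)

goal = approach_point((0.0, 0.0), (4.0, 0.0), 1.0)  # → (3.0, 0.0)
away = retreat_point((3.0, 0.0), (4.0, 0.0), 2.0)   # → (1.0, 0.0)
```

In a real controller, `player` would come from LRF-based tracking and both targets would be re-computed each scan.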

19 pages, 5338 KiB  
Article
Lane Detection Algorithm Using LRF for Autonomous Navigation of Mobile Robot
by Jong-Ho Han and Hyun-Woo Kim
Appl. Sci. 2021, 11(13), 6229; https://doi.org/10.3390/app11136229 - 5 Jul 2021
Cited by 2 | Viewed by 2973
Abstract
This paper proposes a lane detection algorithm using a laser range finder (LRF) for the autonomous navigation of a mobile robot. Many technologies exist for ensuring vehicle safety, such as airbags, ABS, and EPS, and lane detection is a fundamental requirement for an automobile system that utilizes information about the vehicle's external environment. Representative lane recognition methods are vision-based and LRF-based systems. A vision-based system recognizes the three-dimensional environment well only under good image-capturing conditions; unexpected barriers such as bad illumination, occlusions, vibrations, and thick fog prevent vision-based methods from satisfying this fundamental requirement. In this paper, a three-dimensional lane detection algorithm using an LRF, which is very robust against illumination, is proposed. For three-dimensional lane detection, the difference in laser reflection between the asphalt and the lane marking, which depends on color and distance, is utilized to extract feature points. Furthermore, a stable tracking algorithm is introduced empirically in this research. The performance of the proposed lane detection and tracking algorithm has been experimentally verified. Full article
(This article belongs to the Topic Motion Planning and Control for Robotics)
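The intensity-based feature extraction the abstract relies on, lane paint reflecting the laser more strongly than asphalt, can be sketched as a threshold over an LRF scan. The threshold and scan values below are made up for illustration:

```python
import numpy as np

def extract_lane_points(angles, ranges, intensities, threshold):
    """Keep scan points whose laser reflection intensity exceeds a
    threshold (painted lane markings reflect more strongly than
    asphalt) and convert them to Cartesian (x, y) coordinates."""
    mask = intensities > threshold
    x = ranges[mask] * np.cos(angles[mask])
    y = ranges[mask] * np.sin(angles[mask])
    return np.column_stack([x, y])

# Three-beam toy scan: the last two beams hit lane paint.
angles = np.radians(np.array([-10.0, 0.0, 10.0]))
ranges = np.array([5.0, 5.0, 5.0])
intens = np.array([20.0, 90.0, 85.0])
lane_xy = extract_lane_points(angles, ranges, intens, threshold=50.0)
```

A real implementation would also compensate the intensity for distance, since reflections weaken with range, before thresholding.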

17 pages, 42546 KiB  
Communication
Miniaturised Low-Cost Gamma Scanning Platform for Contamination Identification, Localisation and Characterisation: A New Instrument in the Decommissioning Toolkit
by Yannick Verbelen, Peter G. Martin, Kamran Ahmad, Suresh Kaluvan and Thomas B. Scott
Sensors 2021, 21(8), 2884; https://doi.org/10.3390/s21082884 - 20 Apr 2021
Cited by 6 | Viewed by 3417
Abstract
Formerly clandestine, abandoned and legacy nuclear facilities, whether associated with civil or military applications, represent a significant decommissioning challenge owing to the lack of knowledge surrounding the existence, location and types of radioactive material(s) that may be present. Consequently, mobile and highly deployable systems that are able to identify, spatially locate and compositionally assay contamination ahead of remedial actions are of vital importance. Deployment imposes dimensional constraints, arising from small-diameter access ports or pipes. Herein, we describe a prototype low-cost, miniaturised and rapidly deployable ‘cell characterisation’ gamma-ray scanning system that allows enclosed (internal) or outdoor (external) spaces to be examined for radioactive ‘hot-spots’. The readout from the miniaturised, lead-collimated gamma-ray spectrometer, which is progressively rastered through a stepped snake motion, is combined with distance measurements from a single-point laser range-finder to obtain an array of measurements that yields a 3-dimensional point cloud, based on a polar coordinate system and scaled for radiation intensity. Smaller and more cost-effective than presently available platforms, the system produces a millimetre-accurate 3D volumetric rendering of a space, whether internal or external, onto which fully spectroscopic radiation intensity data can be overlain to pinpoint the exact positions at which (even low-abundance) gamma-emitting materials exist. Full article
(This article belongs to the Collection Multi-Sensor Information Fusion)
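Per measurement, building the polar-coordinate point cloud from the rastered scanner reduces to a spherical-to-Cartesian conversion of each pan/tilt/range triple. The axis conventions below are an assumption for illustration, not necessarily those of the instrument:

```python
import math

def polar_to_cartesian(pan_deg, tilt_deg, range_m):
    """Convert one pan/tilt/range measurement from the rastered scan
    into a Cartesian point (x, y, z), with z up and pan measured in
    the horizontal plane from the x-axis."""
    pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
    x = range_m * math.cos(tilt) * math.cos(pan)
    y = range_m * math.cos(tilt) * math.sin(pan)
    z = range_m * math.sin(tilt)
    return (x, y, z)

# One raster step: straight ahead (pan 0), level (tilt 0), 2 m away.
p = polar_to_cartesian(0.0, 0.0, 2.0)
```

Tagging each converted point with the spectrometer reading at that raster step gives the intensity-scaled point cloud the abstract describes.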
