Search Results (3,635)

Search Parameters:
Keywords = YOLOv7

27 pages, 11926 KiB  
Article
Vision-Based Underwater Docking Guidance and Positioning: Enhancing Detection with YOLO-D
by Tian Ni, Can Sima, Wenzhong Zhang, Junlin Wang, Jia Guo and Lindan Zhang
J. Mar. Sci. Eng. 2025, 13(1), 102; https://doi.org/10.3390/jmse13010102 - 7 Jan 2025
Abstract
This study proposed a vision-based underwater vertical docking guidance and positioning method to address docking control challenges for human-operated vehicles (HOVs) and unmanned underwater vehicles (UUVs) under complex underwater visual conditions. A cascaded detection and positioning strategy incorporating fused active and passive markers enabled real-time detection of the relative position and pose between the UUV and docking station (DS). A novel deep learning-based network model, YOLO-D, was developed to detect docking markers in real time. YOLO-D employed the Adaptive Kernel Convolution Module (AKConv) to dynamically adjust the sample shapes and sizes and optimize the target feature detection across various scales and regions. It integrated the Context Aggregation Network (CONTAINER) to enhance small-target detection and overall image accuracy, while the bidirectional feature pyramid network (BiFPN) facilitated effective cross-scale feature fusion, improving detection precision for multi-scale and fuzzy targets. In addition, an underwater docking positioning algorithm leveraging multiple markers was implemented. Tests on an underwater docking markers dataset demonstrated that YOLO-D achieved a detection accuracy (mAP@0.5) of 94.5%, surpassing the baseline YOLOv11n with improvements of 1.5% in precision, 5% in recall, and 4.2% in mAP@0.5. Pool experiments verified the feasibility of the method, achieving a 90% success rate for single-attempt docking and recovery. The proposed approach offered an accurate and efficient solution for underwater docking guidance and target detection, which is of great significance for improving the safety of docking. Full article
(This article belongs to the Special Issue Innovations in Underwater Robotic Software Systems)
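The cross-scale fusion step is the easiest piece of this design to illustrate in isolation. Below is a minimal PyTorch sketch of BiFPN-style fast normalized fusion, the general technique the abstract names, not the authors' YOLO-D code; the module name and tensor shapes are illustrative.

```python
import torch
import torch.nn as nn

class FastFusion(nn.Module):
    # Fast normalized fusion as used in BiFPN: learnable non-negative
    # weights, normalized by their sum, blend feature maps of equal shape.
    def __init__(self, n_inputs):
        super().__init__()
        self.w = nn.Parameter(torch.ones(n_inputs))

    def forward(self, feats):
        w = torch.relu(self.w)           # keep weights non-negative
        w = w / (w.sum() + 1e-4)         # normalize without a softmax
        return sum(wi * f for wi, f in zip(w, feats))

# blend two same-shaped pyramid levels (illustrative shapes)
fused = FastFusion(2)([torch.randn(1, 64, 40, 40), torch.randn(1, 64, 40, 40)])
```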
21 pages, 4307 KiB  
Article
Developing a Fire Monitoring System Based on MQTT, ESP-NOW, and a REM in Industrial Environments
by Miracle Udurume, Taewoong Hwang, Raihan Uddin, Toufiq Aziz and Insoo Koo
Appl. Sci. 2025, 15(2), 500; https://doi.org/10.3390/app15020500 - 7 Jan 2025
Abstract
Fires and fire hazards in industrial environments pose a significant risk to safety, infrastructure, and the operational community. The need for real-time monitoring systems capable of detecting fires early and transmitting alerts promptly is crucial. This paper presents a fire monitoring system utilizing lightweight communication protocols, a multi-hop wireless network, and anomaly detection techniques. The system leverages Message Queue Telemetry Transport (MQTT) for efficient message exchange, ESP-NOW for low-latency and reliable multi-hop wireless communications, and a radio environment map (REM) for optimal node placement, eliminating packet loss and ensuring robust data transmission. The proposed system addresses the limitations of traditional fire monitoring systems, providing flexibility, scalability, and robustness in detecting fire. Data collected by ESP32-CAM sensors, which are equipped with pre-trained YOLOv5-based fire detection modules, are processed and transmitted to a central monitoring server. Experimental results demonstrate a 100% success rate in fire detection transmissions, a significant reduction in latency to 150 ms, and zero packet loss under the REM-guided configuration. These findings validate the system’s suitability for real-time monitoring in high-risk industrial settings. Future work will focus on enhancing the anomaly detection model for greater accuracy, expanding scalability through additional communication protocols, such as LoRaWAN, and incorporating adaptive algorithms for real-time network optimization. Full article
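For context on the MQTT side of such a pipeline, publishing a detection alert from a gateway might look like the sketch below. The broker host, topic, and payload fields are hypothetical, and this is not the paper's code; it simply shows a QoS 1 publish with paho-mqtt.

```python
import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt >= 2.0
client.connect("broker.local", 1883)  # hypothetical broker host and port
client.loop_start()                   # run the network loop in the background

alert = {"node": "esp32cam-03", "event": "fire", "conf": 0.91, "ts": time.time()}
info = client.publish("site/fire/alerts", json.dumps(alert), qos=1)
info.wait_for_publish()               # block until the QoS 1 handshake completes
client.loop_stop()
```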
22 pages, 13437 KiB  
Article
CSGD-YOLO: A Corn Seed Germination Status Detection Model Based on YOLOv8n
by Wenbin Sun, Meihan Xu, Kang Xu, Dongquan Chen, Jianhua Wang, Ranbing Yang, Quanquan Chen and Songmei Yang
Abstract
Seed quality testing is crucial for ensuring food security and stability. To accurately detect the germination status of corn seeds during the paper medium germination test, this study proposes a corn seed germination status detection model based on YOLOv8n (CSGD-YOLO). Initially, to alleviate the complexity encountered in conventional models, a lightweight spatial pyramid pooling fast (L-SPPF) structure is engineered to enhance the representation of features. Simultaneously, a detection module dubbed Ghost_Detection, leveraging the GhostConv architecture, is devised to boost detection efficiency while simultaneously reducing parameter counts and computational overhead. Additionally, during the downsampling process of the backbone network, a downsampling module based on receptive field attention convolution (RFAConv) is designed to boost the model’s focus on areas of interest. This study further proposes a new module named C2f-UIB-iAFF based on the faster implementation of cross-stage partial bottleneck with two convolutions (C2f), universal inverted bottleneck (UIB), and iterative attention feature fusion (iAFF) to replace the original C2f in YOLOv8, streamlining model complexity and augmenting the feature fusion prowess of the residual structure. Experiments conducted on the collected corn seed germination dataset show that CSGD-YOLO requires only 1.91M parameters and 5.21G floating-point operations (FLOPs). The detection precision (P), recall (R), mAP0.5, and mAP0.50:0.95 achieved are 89.44%, 88.82%, 92.99%, and 80.38%. Compared with YOLOv8n, CSGD-YOLO improves performance in terms of accuracy, model size, parameter number, and floating-point operation counts by 1.39, 1.43, 1.77, and 2.95 percentage points, respectively. Therefore, CSGD-YOLO outperforms existing mainstream target detection models in detection performance and model complexity, making it suitable for detecting corn seed germination status and providing a reference for rapid germination rate detection. Full article
(This article belongs to the Section Precision and Digital Agriculture)
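GhostConv, which the Ghost_Detection module builds on, is straightforward to sketch. The following is a generic PyTorch rendering of the GhostNet idea (half the output channels from a primary convolution, the other half from a cheap depthwise operation), not the CSGD-YOLO source:

```python
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    # Ghost convolution: a cheap depthwise op generates half the output
    # channels from the primary conv's features, cutting parameters/FLOPs.
    def __init__(self, c_in, c_out, k=1, s=1):
        super().__init__()
        c_mid = c_out // 2
        self.primary = nn.Sequential(
            nn.Conv2d(c_in, c_mid, k, s, k // 2, bias=False),
            nn.BatchNorm2d(c_mid), nn.SiLU())
        self.cheap = nn.Sequential(
            nn.Conv2d(c_mid, c_mid, 5, 1, 2, groups=c_mid, bias=False),
            nn.BatchNorm2d(c_mid), nn.SiLU())

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

out = GhostConv(32, 64)(torch.randn(1, 32, 80, 80))  # -> 1 x 64 x 80 x 80
```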
18 pages, 4447 KiB  
Article
SSHP-YOLO: A High Precision Printed Circuit Board (PCB) Defect Detection Algorithm with a Small Sample
by Jianxin Wang, Lingcheng Ma, Zixin Li, Yuan Cao and Hao Zhang
Abstract
In the domain of printed circuit board (PCB) defect detection, challenges such as missed detections and false positives remain prevalent. To address these challenges, we propose a small-sample, high-precision PCB defect detection algorithm, called SSHP-YOLO. The proposed method incorporates an ELAN-C module that merges the convolutional block attention module (CBAM) with the efficient layer aggregation network (ELAN), thereby enhancing the model’s focus on defect features and improving the detection of minute defect details. Furthermore, we introduce the ASPPCSPC structure, which extracts multi-scale features using pyramid pooling combined with dilated convolutions while maintaining the resolution of feature maps. This design improves the detection accuracy and robustness, thereby enhancing the algorithm’s generalization ability. Additionally, we employ the SIoU loss function to optimize the regression between the predicted and ground-truth bounding boxes, thus improving the localization accuracy of minute defects. The experimental results show that SSHP-YOLO achieves a recall rate that is 11.84% higher than traditional YOLOv7, with a mean average precision (mAP) of 97.80%. This leads to a substantial improvement in the detection accuracy, effectively mitigating issues related to missed and false detections in PCB defect detection tasks. Full article
(This article belongs to the Section Computer Science & Engineering)
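CBAM, which the ELAN-C module merges into ELAN, applies channel attention followed by spatial attention. A compact, generic PyTorch sketch of that mechanism (not the SSHP-YOLO implementation) is:

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    # Convolutional Block Attention Module: channel attention from pooled
    # descriptors, then spatial attention from channel-wise mean/max maps.
    def __init__(self, c, r=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(c, c // r, 1), nn.ReLU(), nn.Conv2d(c // r, c, 1))
        self.spatial = nn.Conv2d(2, 1, 7, padding=3)

    def forward(self, x):
        ca = torch.sigmoid(self.mlp(x.mean((2, 3), keepdim=True)) +
                           self.mlp(x.amax((2, 3), keepdim=True)))
        x = x * ca
        sa = torch.sigmoid(self.spatial(
            torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], 1)))
        return x * sa
```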
19 pages, 946 KiB  
Article
PDS-YOLO: A Real-Time Detection Algorithm for Pipeline Defect Detection
by Ke Zhang, Longxiao Qin and Liming Zhu
Abstract
Regular inspection of urban drainage pipes can effectively maintain the reliable operation of the drainage system and the production safety of residents. To address the shortcomings of the CCTV inspection method used in the drainage pipe defect detection task, namely inefficient manual inspection and the possibility of errors and omissions, a PDS-YOLO algorithm that can be deployed in the pipe defect detection system is proposed. First, the C2f-PCN module was introduced to reduce model complexity and the size of the model weight file. Second, to enhance the model’s capability in detecting pipe defect edges, we incorporate the SPDSC structure within the neck network. By introducing a hybrid local channel attention (MLCA) mechanism and a Wise-IoU loss function based on a dynamic focusing mechanism, the model improves detection precision without adding extra computational cost and enhances the extraction and expression of pipeline defect features. The experimental outcomes indicate that the mAP, F1-score, precision, and recall of the PDS-YOLO algorithm are improved by 3.4%, 4%, 4.8%, and 4.0%, respectively, compared to the original algorithm. Additionally, the model’s parameter count and GFLOPs are reduced by 8.6% and 12.3%, respectively. It saves computational resources while improving the detection accuracy, and provides a more lightweight model for defect detection systems with tight computing power. Finally, the PDS-YOLOv8n model is deployed to the NVIDIA Jetson Nano, the central console of the mobile embedded system, and the weight files are optimized using TensorRT. The test results show that the model’s inference speed on the embedded device improves from 5.4 FPS to 19.3 FPS, which satisfies the requirements of real-time pipeline defect detection in mobile scenarios. Full article
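The TensorRT deployment step described above can be approximated with the Ultralytics export API on a Jetson with TensorRT installed. The weight and image paths below are hypothetical, and the authors' exact optimization settings are not given, so treat this as a sketch of the workflow rather than their deployment script:

```python
from ultralytics import YOLO

# Export the trained weights to a TensorRT engine (FP16 for Jetson-class GPUs).
model = YOLO("pds_yolo.pt")                       # hypothetical weights file
model.export(format="engine", half=True, device=0)

# Reload the optimized engine and run inference on a sample frame.
trt_model = YOLO("pds_yolo.engine")
results = trt_model.predict("pipe_frame.jpg", imgsz=640)
```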
10 pages, 827 KiB  
Technical Note
A Novel and Automated Approach to Detect Sea- and Land-Based Aquaculture Facilities
by Maxim Veroli, Marco Martinoli, Arianna Martini, Riccardo Napolitano, Domitilla Pulcini, Nicolò Tonachella and Fabrizio Capoccioni
Abstract
Aquaculture, a globally widespread practice and the world’s fastest-growing food sector, requires technological advances to both increase productivity and minimize environmental impacts. Monitoring the sector is one of the priorities of state governments, international organizations such as the Food and Agriculture Organization of the United Nations (FAO), and the European Commission. Data collection in aquaculture, particularly information on the location, number, and size of production facilities, is challenging due to the time required, the extent of the area to be monitored, the frequent changes in farming infrastructures and licenses, and the lack of automated tools. Such information is usually obtained through direct communications (e.g., phone calls and e-mails) with aquaculture producers and is rarely confirmed with on-site measurements. This study describes an innovative and automated method to obtain data on the number and placement of structures for marine and freshwater finfish farming through a YOLOv4 model trained on high-resolution images. High-resolution images were extracted from Google Maps to test their use with the YOLO model for the identification and geolocation of both land-based (raceways used in salmonid farming) and sea-based (floating sea cages used in seabream, seabass, and meagre farming) aquaculture systems in Italy. An overall accuracy of approximately 85% in correctly recognizing objects of the target class was achieved. Model accuracy was tested with a dataset that includes images from Tuscany (Italy), where all these farm typologies are represented. The results demonstrate that the proposed approach can identify, characterize, and geolocate sea- and land-based aquaculture structures without performing any post-processing procedure, by directly applying customized deep learning and artificial intelligence algorithms. Full article
(This article belongs to the Special Issue The Future of Artificial Intelligence in Agriculture)
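A sketch of the geolocation step, mapping a detected bounding box in an exported map tile to longitude/latitude under a locally linear approximation, is shown below. The tile bounds, tile size, and box coordinates are invented for illustration and are not values from the study:

```python
def pixel_to_lonlat(px, py, img_w, img_h, bounds):
    # bounds = (lon_min, lat_min, lon_max, lat_max) of the exported map tile;
    # assumes a locally linear mapping, reasonable at farm scale.
    lon_min, lat_min, lon_max, lat_max = bounds
    lon = lon_min + (px / img_w) * (lon_max - lon_min)
    lat = lat_max - (py / img_h) * (lat_max - lat_min)  # image y grows downward
    return lon, lat

# centre of a detected cage bounding box (x1, y1, x2, y2) in a 1280x1280 tile
x1, y1, x2, y2 = 412, 305, 468, 362
print(pixel_to_lonlat((x1 + x2) / 2, (y1 + y2) / 2, 1280, 1280,
                      (10.30, 42.35, 10.34, 42.39)))
```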
23 pages, 5106 KiB  
Article
A Real-Time Green and Lightweight Model for Detection of Liquefied Petroleum Gas Cylinder Surface Defects Based on YOLOv5
by Burhan Duman
Appl. Sci. 2025, 15(1), 458; https://doi.org/10.3390/app15010458 - 6 Jan 2025
Abstract
Industry requires defect detection to ensure the quality and safety of products. In resource-constrained devices, real-time speed, accuracy, and computational efficiency are the most critical requirements for defect detection. This paper presents a novel approach for real-time detection of surface defects on LPG cylinders, utilising an enhanced YOLOv5 architecture referred to as GLDD-YOLOv5. The architecture integrates ghost convolution and ECA blocks to improve feature extraction with less computational overhead in the network’s backbone. It also modifies the P3–P4 head structure to increase detection speed. These changes enable the model to focus more effectively on small and medium-sized defects. Based on comparative analysis with other YOLO models, the proposed method demonstrates superior performance. Compared to the base YOLOv5s model, the proposed method achieved a 4.6% increase in average accuracy, a 44% reduction in computational cost, a 45% decrease in parameter counts, and a 26% reduction in file size. In experimental evaluations on the RTX2080Ti, the model achieved an inference rate of 163.9 FPS with a total carbon footprint of 0.549 × 10⁻³ gCO₂e. The proposed technique offers an efficient and robust defect detection model with an eco-friendly solution compatible with edge computing devices. Full article
(This article belongs to the Section Green Sustainable Science and Technology)
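The ECA block used alongside ghost convolution in the backbone is small enough to sketch generically: a 1-D convolution over globally pooled channel descriptors, with no dimensionality reduction. This follows the published ECA-Net design rather than the paper's code:

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    # Efficient Channel Attention: global average pool, then a k-sized
    # 1-D conv across channels to model local cross-channel interaction.
    def __init__(self, k=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, k, padding=k // 2, bias=False)

    def forward(self, x):
        y = x.mean((2, 3))                        # B x C channel descriptors
        y = self.conv(y.unsqueeze(1)).squeeze(1)  # 1-D conv across channels
        return x * torch.sigmoid(y)[:, :, None, None]

out = ECA()(torch.randn(1, 64, 40, 40))
```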
20 pages, 4126 KiB  
Article
FD-YOLO: A YOLO Network Optimized for Fall Detection
by Hoseong Hwang, Donghyun Kim and Hochul Kim
Appl. Sci. 2025, 15(1), 453; https://doi.org/10.3390/app15010453 - 6 Jan 2025
Abstract
Falls are defined by the World Health Organization (WHO) as incidents in which an individual unintentionally falls to the ground or a lower level. Falls represent a serious public health issue, ranking as the second leading cause of death from unintentional injuries, following traffic accidents. While fall prevention is crucial, prompt intervention after a fall is equally necessary. Delayed responses can result in severe complications, reduced recovery potential, and a negative impact on quality of life. This study focuses on detecting fall situations using image-based methods. The fall images utilized in this research were created by combining three open-source datasets to enhance generalization and adaptability across diverse scenarios. Because falls must be detected promptly, the YOLO (You Only Look Once) network, known for its effectiveness in real-time detection, was applied. To better capture the complex body structures and interactions with the floor during a fall, two key techniques were integrated. First, a global attention module (GAM) based on the Convolutional Block Attention Module (CBAM) was employed to improve detection performance. Second, a Transformer-based Swin Transformer module was added to effectively learn global spatial information and enable a more detailed analysis of body movements. This study prioritized minimizing missed fall detections (false negatives, FN) as the key performance metric, since undetected falls pose greater risks than false detections. The proposed Fall Detection YOLO (FD-YOLO) network, developed by integrating the Swin Transformer and GAM into YOLOv9, achieved a high mAP@0.5 score of 0.982 and recorded only 134 missed fall incidents, demonstrating optimal performance. When implemented in environments equipped with standard camera systems, the proposed FD-YOLO network is expected to enable real-time fall detection and prompt post-fall responses. This technology has the potential to significantly improve public health and safety by preventing fall-related injuries and facilitating rapid interventions. Full article
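Since the study treats missed falls (FN) as the key metric, a simple way to count them at a given IoU threshold is sketched below. This is generic matching logic under assumed box formats, not the authors' evaluation script:

```python
def count_missed_falls(gt_boxes, pred_boxes, iou_thr=0.5):
    # A ground-truth fall counts as missed (FN) if no prediction
    # overlaps it with IoU >= iou_thr. Boxes are (x1, y1, x2, y2).
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter + 1e-9)
    return sum(all(iou(g, p) < iou_thr for p in pred_boxes) for g in gt_boxes)

print(count_missed_falls([(10, 10, 50, 90)], [(12, 8, 52, 88)]))  # -> 0
```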
17 pages, 6914 KiB  
Article
YOLO-TC: An Optimized Detection Model for Monitoring Safety-Critical Small Objects in Tower Crane Operations
by Dong Ding, Zhengrong Deng and Rui Yang
Algorithms 2025, 18(1), 27; https://doi.org/10.3390/a18010027 - 6 Jan 2025
Abstract
Ensuring operational safety within high-risk environments, such as construction sites, is paramount, especially for tower crane operations where distractions can lead to severe accidents. Despite existing behavioral monitoring approaches, the task of identifying small yet hazardous objects like mobile phones and cigarettes in real time remains a significant challenge in ensuring operator compliance and site safety. Traditional object detection models often fall short in crane operator cabins due to complex lighting conditions, cluttered backgrounds, and the small physical scale of target objects. To address these challenges, we introduce YOLO-TC, a refined object detection model tailored specifically for tower crane monitoring applications. Built upon the robust YOLOv7 architecture, our model integrates a novel channel–spatial attention mechanism, ECA-CBAM, into the backbone network, enhancing feature extraction without an increase in parameter count. Additionally, we propose the HA-PANet architecture to achieve progressive feature fusion, addressing scale disparities and prioritizing small object detection while reducing noise from unrelated objects. To improve bounding box regression, the MPDIoU Loss function is employed, resulting in superior accuracy for small, critical objects in dense environments. The experimental results on both the PASCAL VOC benchmark and a custom dataset demonstrate that YOLO-TC outperforms baseline models, showcasing its robustness in identifying high-risk objects under challenging conditions. This model holds significant promise for enhancing automated safety monitoring, potentially reducing occupational hazards by providing a proactive, resilient solution for real-time risk detection in tower crane operations. Full article
(This article belongs to the Special Issue Advances in Computer Vision: Emerging Trends and Applications)
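The MPDIoU loss mentioned above penalizes the corner-to-corner distances between predicted and ground-truth boxes, normalized by the image diagonal. A sketch following the published MPDIoU formulation (not the YOLO-TC code) is:

```python
import torch

def mpdiou_loss(pred, gt, img_w, img_h):
    # Boxes as (x1, y1, x2, y2). MPDIoU = IoU - d_tl^2/D - d_br^2/D,
    # where D = img_w^2 + img_h^2; the loss is 1 - MPDIoU.
    ix1 = torch.max(pred[..., 0], gt[..., 0])
    iy1 = torch.max(pred[..., 1], gt[..., 1])
    ix2 = torch.min(pred[..., 2], gt[..., 2])
    iy2 = torch.min(pred[..., 3], gt[..., 3])
    inter = (ix2 - ix1).clamp(0) * (iy2 - iy1).clamp(0)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    iou = inter / (area(pred) + area(gt) - inter + 1e-9)
    diag = img_w ** 2 + img_h ** 2
    d_tl = (pred[..., 0] - gt[..., 0]) ** 2 + (pred[..., 1] - gt[..., 1]) ** 2
    d_br = (pred[..., 2] - gt[..., 2]) ** 2 + (pred[..., 3] - gt[..., 3]) ** 2
    return 1 - (iou - d_tl / diag - d_br / diag)

loss = mpdiou_loss(torch.tensor([10., 10., 50., 50.]),
                   torch.tensor([12., 8., 52., 48.]), 640, 640)
```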
20 pages, 14871 KiB  
Article
An Underwater Object Recognition System Based on Improved YOLOv11
by Shun Cheng, Yan Han, Zhiqian Wang, Shaojin Liu, Bo Yang and Jianrong Li
Abstract
Common underwater target recognition systems suffer from low accuracy, high energy consumption, and low levels of automation. This paper introduces an underwater target recognition system based on the Jetson Xavier NX platform, which deploys an improved YOLOv11 recognition algorithm. During operation, the Jetson Xavier NX invokes an industrial camera to capture underwater target images, which are then processed by the improved YOLOv11 network for inference. The recognized information is transmitted via a serial port to an STM32 control board, which adaptively adjusts the lighting system to enhance image clarity based on the target information. Finally, the system controls an actuator to release a buoyant ball with positioning capabilities and communicates with the shore. On the ROUD dataset, the improved YOLOv11 algorithm achieves an accuracy of 87.5%, with a parameter size of 2.58M and a floating-point operation count of 6.3G, outperforming all current models. Compared to the original YOLOv11, the parameter size is reduced by 5% and the floating-point operation count by 0.3G. The improved DD-YOLOv11 also shows good performance on the URPC2020 dataset. After on-site experiments and hardware–software integration tests, all functions operate normally. The system is capable of identifying a specific underwater target with an accuracy rate of over 85%, simultaneously releasing communication buoys and successfully establishing communication with the shore base. This indicates that the underwater target recognition system meets the requirements of being lightweight, high-precision, and highly automated. Full article
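The serial hand-off from the Jetson to the STM32 control board could look like the sketch below; the device path, baud rate, and message format are assumptions for illustration, not details from the paper:

```python
import json
import serial  # pyserial

# Hypothetical UART link from the Jetson Xavier NX to the STM32 board.
ser = serial.Serial("/dev/ttyTHS1", 115200, timeout=1)

# One detection, normalized centre coordinates; newline-delimited JSON frame.
detection = {"cls": "target", "conf": 0.88, "cx": 0.52, "cy": 0.47}
ser.write((json.dumps(detection) + "\n").encode("ascii"))
ser.close()
```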
28 pages, 43934 KiB  
Article
A Cross-Stage Focused Small Object Detection Network for Unmanned Aerial Vehicle Assisted Maritime Applications
by Gege Ding, Jiayue Liu, Dongsheng Li, Xiaming Fu, Yucheng Zhou, Mingrui Zhang, Wantong Li, Yanjuan Wang, Chunxu Li and Xiongfei Geng
J. Mar. Sci. Eng. 2025, 13(1), 82; https://doi.org/10.3390/jmse13010082 - 5 Jan 2025
Abstract
The application potential of unmanned aerial vehicles (UAVs) in marine search and rescue has drawn growing attention with the ongoing advancement of visual recognition and image processing technology. Limited computing resources, insufficient pixel representation for small objects in high-altitude images, and challenging visibility conditions hinder UAVs’ target recognition performance in maritime search and rescue operations, highlighting the need for further optimization and enhancement. This study introduces an innovative detection framework, CFSD-UAVNet, designed to boost the accuracy of detecting minor objects within imagery captured from elevated altitudes. To improve the performance of the feature pyramid network (FPN) and path aggregation network (PAN), a newly designed PHead structure was proposed, focusing on better leveraging shallow features. Then, structural pruning was applied to refine the model and enhance its capability in detecting small objects. Moreover, to conserve computational resources, a lightweight CED module was introduced to reduce parameters and conserve the computing resources of the UAV. At the same time, in each detection layer, a lightweight CRE module was integrated, leveraging attention mechanisms and detection heads to enhance precision for small object detection. Finally, to enhance the model’s robustness, the WIoUv2 loss function was employed, ensuring a balanced treatment of positive and negative samples. The CFSD-UAVNet model was evaluated on the publicly available SeaDronesSee maritime dataset and compared with other cutting-edge algorithms. The experimental results showed that the CFSD-UAVNet model achieved an mAP@50 of 80.1% with only 1.7 M parameters and a computational cost of 10.2 G, marking a 12.1% improvement over YOLOv8 and a 4.6% increase compared to DETR. The novel CFSD-UAVNet model effectively balances scenario constraints and detection accuracy, demonstrating application potential and value in the field of UAV-assisted maritime search and rescue. Full article
(This article belongs to the Section Ocean Engineering)
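The structural pruning step has a direct analogue in PyTorch's pruning utilities. The sketch below removes 30% of a convolution's filters by L2 norm; the amount is illustrative, since the paper's pruning recipe is not specified here:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

conv = nn.Conv2d(64, 128, 3, padding=1)

# Filter-wise (dim=0) structured pruning by L2 norm: zeroes out whole
# output channels, then folds the mask permanently into the weights.
prune.ln_structured(conv, name="weight", amount=0.3, n=2, dim=0)
prune.remove(conv, "weight")
```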
23 pages, 5531 KiB  
Article
Optimal Coverage Path Planning for UAV-Assisted Multiple USVs: Map Modeling and Solutions
by Shaohua Pan, Xiaosu Xu, Yi Cao and Liang Zhang
Abstract
With the increasing demand for marine monitoring, the use of coverage path planning based on unmanned aerial vehicle (UAV) aerial images to assist multiple unmanned surface vehicles (USVs) has shown great potential in marine applications. However, accurate map modeling and optimal path planning are still key challenges that restrict its widespread application. To this end, an innovative coverage path planning algorithm for UAV-assisted multiple USVs is proposed. First, a semantic segmentation algorithm based on the YOLOv5-assisted prompting segment anything model (SAM) is designed to establish an accurate map model. By refining the axial, length, width, and coordinate information of obstacles, the algorithm enables YOLOv5 to generate accurate object bounding box prompts and then assists SAM in automatically and accurately extracting obstacles and coastlines in complex scenes. Based on this accurate map model, a multi-objective stepwise optimization coverage path planning algorithm is further proposed. The algorithm divides the complete path into two parts, straight paths and turning paths, and optimizes the length and the number of turns of each type of path step by step, which significantly improves the coverage effect. Experiments show that, in various complex marine coverage scenarios, the proposed algorithm achieves 100% coverage with a redundancy rate below 2%, and it is superior to existing advanced algorithms in path length and number of turns. This research provides a feasible technical solution for efficient and accurate marine coverage tasks and lays the foundation for unmanned marine supervision. Full article
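Feeding YOLO boxes to SAM as prompts, the core of the map-modeling step, follows the standard segment-anything API. A sketch with hypothetical weight and image paths (not the authors' pipeline, which additionally refines the box prompts):

```python
import cv2
from ultralytics import YOLO
from segment_anything import SamPredictor, sam_model_registry

img = cv2.imread("aerial_scene.jpg")  # hypothetical UAV frame

# Detector produces bounding boxes; weights file is a placeholder name.
boxes = YOLO("yolov5_obstacles.pt").predict(img)[0].boxes.xyxy.cpu().numpy()

# Each detected box becomes a prompt for SAM's mask prediction.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
predictor = SamPredictor(sam)
predictor.set_image(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
masks = [predictor.predict(box=b, multimask_output=False)[0] for b in boxes]
```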
20 pages, 6100 KiB  
Article
Rearview Camera-Based Blind-Spot Detection and Lane Change Assistance System for Autonomous Vehicles
by Yunhee Lee and Manbok Park
Appl. Sci. 2025, 15(1), 419; https://doi.org/10.3390/app15010419 - 4 Jan 2025
Abstract
This paper focuses on a method of rearview camera-based blind-spot detection and a lane change assistance system for autonomous vehicles, utilizing a convolutional neural network and lane detection. In this study, we propose a method for providing real-time warnings to autonomous vehicles and drivers regarding collision risks during lane-changing maneuvers. Lane detection is used to delineate the area for blind-spot detection and to measure time to collision, both of which serve to ascertain the vehicle’s location and compensate for vertical vibrations caused by vehicle movement. The lane detection method uses edge detection on an input image to extract lane markings by employing edge pairs consisting of positive and negative edges. Lanes were extracted through third-order polynomial fitting of the extracted lane markings, with each lane marking being tracked using the results from the previous frame detections. Using the vanishing point where the two lanes converge, the camera calibration information is updated to compensate for the vertical vibrations caused by vehicle movement. Additionally, the proposed method utilized YOLOv9 for object detection, leveraging lane information to define the region of interest (ROI) and detect small-sized objects. The object detection achieved a precision of 90.2% and a recall of 82.8%. The detected object information was subsequently used to calculate the collision risk. A collision risk assessment was performed for various objects using a three-level collision warning system that adapts to the relative speed of obstacles. The proposed method demonstrated a performance of 11.64 fps with an execution time of 85.87 ms. It provides real-time warnings to both drivers and autonomous vehicles regarding potential collisions with detected objects. Full article
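The third-order polynomial lane fit can be illustrated with NumPy. The edge points below are invented, and fitting x as a function of y reflects the near-vertical orientation of lane markings in a rearview image:

```python
import numpy as np

# (x, y) lane-marking edge points in image coordinates (illustrative values).
ys = np.array([480, 440, 400, 360, 320, 280])
xs = np.array([612, 598, 586, 577, 571, 568])

coeffs = np.polyfit(ys, xs, 3)                 # third-order polynomial fit
lane_x = np.polyval(coeffs, np.arange(280, 481))  # evaluate along the lane
```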
16 pages, 11407 KiB  
Article
YOLOv8-LCNET: An Improved YOLOv8 Automatic Crater Detection Algorithm and Application in the Chang’e-6 Landing Area
by Jing Nan, Yexin Wang, Kaichang Di, Bin Xie, Chenxu Zhao, Biao Wang, Shujuan Sun, Xiangjin Deng, Hong Zhang and Ruiqing Sheng
Sensors 2025, 25(1), 243; https://doi.org/10.3390/s25010243 - 3 Jan 2025
Abstract
The Chang’e-6 (CE-6) landing area on the far side of the Moon is located in the southern part of the Apollo basin within the South Pole–Aitken (SPA) basin. The statistical analysis of impact craters in this region is crucial for ensuring a safe landing and supporting geological research. Aiming at existing impact crater identification problems such as complex background, low identification accuracy, and high computational costs, an efficient impact crater automatic detection model named YOLOv8-LCNET (YOLOv8-Lunar Crater Net) based on the YOLOv8 network is proposed. The model first incorporated a Partial Self-Attention (PSA) mechanism at the end of the Backbone, allowing the model to enhance global perception and reduce missed detections with a low computational cost. Then, a Gather-and-Distribute mechanism (GD) was integrated into the Neck, enabling the model to fully fuse multi-level feature information and capture global information, enhancing the model’s ability to detect impact craters of various sizes. The experimental results showed that the YOLOv8-LCNET model performs well in the impact crater detection task, achieving 87.7% Precision, 84.3% Recall, and 92% AP, which were 24.7%, 32.7%, and 37.3% higher than the original YOLOv8 model. The improved YOLOv8 model was then used for automatic crater detection in the CE-6 landing area (246 km × 135 km, with a DOM resolution of 3 m/pixel), resulting in a total of 770,671 craters, ranging from 13 m to 19,882 m in diameter. The analysis of this impact crater catalogue has provided critical support for landing site selection and characterization of the CE-6 mission and lays the foundation for future lunar geological studies. Full article
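Converting detected boxes into physical crater diameters at the stated 3 m/pixel DOM resolution is simple arithmetic, sketched here with illustrative boxes rather than mission data:

```python
# Pixel-space detections to physical crater diameters at 3 m/pixel.
RES_M_PER_PX = 3.0
boxes = [(120, 88, 164, 131), (410, 402, 470, 461)]  # (x1, y1, x2, y2), invented

# Average of box width and height approximates the crater's pixel diameter.
diameters_m = [RES_M_PER_PX * ((x2 - x1) + (y2 - y1)) / 2
               for x1, y1, x2, y2 in boxes]
print(diameters_m)  # e.g. [130.5, 178.5] metres
```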
22 pages, 4204 KiB  
Article
AquaYOLO: Enhancing YOLOv8 for Accurate Underwater Object Detection for Sonar Images
by Yanyang Lu, Jingjing Zhang, Qinglang Chen, Chengjun Xu, Muhammad Irfan and Zhe Chen
J. Mar. Sci. Eng. 2025, 13(1), 73; https://doi.org/10.3390/jmse13010073 - 3 Jan 2025
Abstract
Object detection in underwater environments presents significant challenges due to the inherent limitations of sonar imaging, such as noise, low resolution, and a lack of texture and color information. This paper introduces AquaYOLO, an enhanced version of YOLOv8 specifically designed to improve object detection accuracy in underwater sonar images. AquaYOLO replaces traditional convolutional layers with a residual block in the backbone network to enhance feature extraction. In addition, we introduce a Dynamic Selection Aggregation Module (DSAM) and Context-Aware Feature Selection (CAFS) in the neck network. These modifications allow AquaYOLO to capture intricate details better and reduce feature redundancy, leading to improved performance in underwater object detection tasks. The model is evaluated on two standard underwater sonar datasets, UATD and Marine Debris, demonstrating superior accuracy and robustness compared to baseline models. Full article
(This article belongs to the Special Issue Application of Deep Learning in Underwater Image Processing)
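Replacing a plain convolutional layer with a residual block, as AquaYOLO does in its backbone, can be sketched generically in PyTorch; the module design below is an assumption for illustration, not taken from the paper:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # Drop-in replacement for a plain conv layer: two 3x3 convs plus a
    # skip connection, easing gradient flow on low-texture sonar imagery.
    def __init__(self, c):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c, c, 3, padding=1, bias=False), nn.BatchNorm2d(c), nn.SiLU(),
            nn.Conv2d(c, c, 3, padding=1, bias=False), nn.BatchNorm2d(c))
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(x + self.body(x))

out = ResidualBlock(64)(torch.randn(1, 64, 80, 80))
```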