Search Results (585)

Search Parameters:
Keywords = vision system automation

15 pages, 2425 KiB  
Article
Online Self-Supervised Learning for Accurate Pick Assembly Operation Optimization
by Sergio Valdés, Marco Ojer and Xiao Lin
Abstract
The demand for flexible automation in manufacturing has increased, incorporating vision-guided systems for object grasping. However, a key challenge is in-hand error, where discrepancies between the actual and estimated positions of an object in the robot’s gripper impact not only the grasp but also subsequent assembly stages. Corrective strategies used to compensate for misalignment can increase cycle times or rely on pre-labeled datasets, offline training, and validation processes, delaying deployment and limiting adaptability in dynamic industrial environments. Our main contribution is an online self-supervised learning method that automates data collection, training, and evaluation in real time, eliminating the need for offline processes. Building on this, our system collects real-time data during each assembly cycle, using corrective strategies to adjust the data and autonomously labeling them via a self-supervised approach. It then builds and evaluates multiple regression models through an auto machine learning implementation. The system selects the best-performing model to correct the misalignment and dynamically chooses between corrective strategies and the learned model, optimizing the cycle times and improving the performance during the cycle, without halting the production process. Our experiments show a significant reduction in the cycle time while maintaining the performance. Full article
(This article belongs to the Section Industrial Robots and Automation)
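The online model-selection loop this abstract describes (collect cycle data, fit several candidate regressors on recent samples, and keep the best one only when it beats an error threshold) can be sketched in plain Python. The two candidate models, the threshold, and the synthetic in-hand-error data below are illustrative assumptions, not the authors' implementation:

```python
import statistics

def fit_mean(train):
    """Baseline candidate: always predict the mean correction."""
    m = statistics.fmean(y for _, y in train)
    return lambda x: m

def fit_line(train):
    """Candidate: 1-D least-squares line y = a*x + b."""
    n = len(train)
    sx = sum(x for x, _ in train)
    sy = sum(y for _, y in train)
    sxx = sum(x * x for x, _ in train)
    sxy = sum(x * y for x, y in train)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b

def mse(model, data):
    return statistics.fmean((model(x) - y) ** 2 for x, y in data)

def select_model(samples, holdout=0.3, threshold=0.5):
    """Fit candidates on the older samples, score them on the newest ones,
    and return the best model only if it beats the error threshold."""
    split = int(len(samples) * (1 - holdout))
    train, val = samples[:split], samples[split:]
    candidates = [fit_mean(train), fit_line(train)]
    best = min(candidates, key=lambda m: mse(m, val))
    return best if mse(best, val) < threshold else None  # None -> corrective strategy

# Synthetic cycle data: correction grows linearly with the measured offset.
data = [(float(x), 2.0 * x + 0.1) for x in range(20)]
model = select_model(data)
print(model(5.0) if model else "fallback to corrective strategy")
```

A production system would draw `samples` from live assembly cycles and evaluate the AutoML-generated candidate pool; returning `None` here models the dynamic fall-back to the corrective strategy.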

19 pages, 13592 KiB  
Article
Firefighting with Conductive Aerosol-Assisted Vortex Rings
by John LaRocco, Qudsia Tahmina, Stanley Essel and John Simonis
Technologies 2025, 13(1), 10; https://doi.org/10.3390/technologies13010010 - 27 Dec 2024
Abstract
Conventional firefighting tools and methods can strain water sources, require toxic foams, or rely on pre-installed countermeasures. A low-cost, non-toxic, and portable option was previously overlooked in portable devices: electrically assisted “ionic wind” fire suppression. Conductive aerosols, carried by vortex rings, can potentially extend the length of an electric arc and suppress fires. After the simulation, two prototype vortex ring launchers were compared, one using compressed air and another using an elastic diaphragm. The efficiency of each test case was assessed with a purpose-built automated image analysis system. The compressed air vortex launcher had a significantly higher efficiency than the elastic diaphragm prototype, with a p-value of 0.0006. Regardless of the prototype or the use of conductive aerosols, the device had an effective range of up to 1.98 m. The highest reliability of 90 ± 4.1% was achieved at 1.52 m from the launcher. The observations with compressed air launcher results saw no significant difference regarding the use of the conductive aerosol. Further investigation of the concept requires a systematic examination of other types of fires, electronic optimization, permutations of chemicals and concentrations, other types of vortex generation, and human factors. The computer vision system could also be used to further detect and target active fires. Beyond firefighting, the device can be adapted to applications ranging from manufacturing to aerospace. Regardless of the use of conductive aerosols, handheld vortex ring generators are a versatile, potential firefighting tool. Full article
(This article belongs to the Section Environmental Technology)
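The reported p-value of 0.0006 presumably comes from a two-sample comparison of launcher efficiencies. Below is a stdlib sketch of Welch's t statistic (unequal variances, unequal sample sizes) on made-up efficiency numbers; converting t to an actual p-value needs a t-distribution CDF, e.g. `scipy.stats.ttest_ind(..., equal_var=False)`:

```python
import math
import statistics

def welch_t(a, b):
    """Welch's two-sample t statistic and its degrees of freedom
    (Welch-Satterthwaite approximation)."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical per-trial efficiencies (fractions) for the two prototypes.
compressed_air = [0.91, 0.88, 0.93, 0.90, 0.89, 0.92]
diaphragm      = [0.71, 0.68, 0.74, 0.70, 0.66, 0.72]
t, df = welch_t(compressed_air, diaphragm)
print(f"t = {t:.2f}, df = {df:.1f}")  # a large |t| implies a small p-value
```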

19 pages, 8495 KiB  
Article
Design and Development of a Precision Defect Detection System Based on a Line Scan Camera Using Deep Learning
by Byungcheol Kim, Moonsun Shin and Seonmin Hwang
Appl. Sci. 2024, 14(24), 12054; https://doi.org/10.3390/app142412054 - 23 Dec 2024
Abstract
The manufacturing industry environment is rapidly evolving into smart manufacturing. It prioritizes digital innovations such as AI and digital transformation (DX) to increase productivity and create value through automation and intelligence. Vision systems for defect detection and quality control are being implemented across industries, including electronics, semiconductors, printing, metal, food, and packaging. Small and medium-sized manufacturing companies are increasingly demanding smart factory solutions for quality control to create added value and enhance competitiveness. In this paper, we design and develop a high-speed defect detection system based on a line-scan camera using deep learning. The camera is positioned for side-view imaging, allowing for detailed inspection of the component mounting and soldering quality on PCBs. To detect defects on PCBs, the system gathers extensive images of both flawless and defective products to train a deep learning model. An AI engine generated through this deep learning process is then applied to conduct defect inspections. The developed high-speed defect detection system was evaluated to have an accuracy of 99.5% in the experiment. This will be highly beneficial for precision quality management in small- and medium-sized enterprises. Full article
(This article belongs to the Special Issue Future Information & Communication Engineering 2024)
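A line-scan camera produces one image row per trigger as the part moves past, and rows are stacked into a 2-D frame before inspection. The sketch below assembles rows and flags outlier rows by mean intensity; this is a deliberately crude stand-in for the paper's deep-learning inspector, and the data and z-threshold are invented:

```python
import statistics

def assemble_image(rows):
    """Stack 1-D scan lines into a 2-D image (list of rows)."""
    width = len(rows[0])
    assert all(len(r) == width for r in rows), "scan lines must share a width"
    return list(rows)

def flag_defect_rows(image, z=2.0):
    """Flag rows whose mean intensity deviates strongly from the image mean.
    A real system would run a trained classifier per region instead."""
    means = [statistics.fmean(row) for row in image]
    mu, sd = statistics.fmean(means), statistics.stdev(means)
    return [i for i, m in enumerate(means) if abs(m - mu) > z * sd]

# Nine bright scan lines and one dark one (e.g. a missing component's shadow).
scan = [[100] * 8 for _ in range(9)] + [[10] * 8]
print(flag_defect_rows(assemble_image(scan)))  # -> [9]
```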

32 pages, 19029 KiB  
Article
Towards Context-Rich Automated Biodiversity Assessments: Deriving AI-Powered Insights from Camera Trap Data
by Paul Fergus, Carl Chalmers, Naomi Matthews, Stuart Nixon, André Burger, Oliver Hartley, Chris Sutherland, Xavier Lambin, Steven Longmore and Serge Wich
Sensors 2024, 24(24), 8122; https://doi.org/10.3390/s24248122 - 19 Dec 2024
Abstract
Camera traps offer enormous new opportunities in ecological studies, but current automated image analysis methods often lack the contextual richness needed to support impactful conservation outcomes. Integrating vision–language models into these workflows could address this gap by providing enhanced contextual understanding and enabling advanced queries across temporal and spatial dimensions. Here, we present an integrated approach that combines deep learning-based vision and language models to improve ecological reporting using data from camera traps. We introduce a two-stage system: YOLOv10-X to localise and classify species (mammals and birds) within images and a Phi-3.5-vision-instruct model to read YOLOv10-X bounding box labels to identify species, overcoming its limitation with hard-to-classify objects in images. Additionally, Phi-3.5 detects broader variables, such as vegetation type and time of day, providing rich ecological and environmental context to YOLO’s species detection output. When combined, this output is processed by the model’s natural language system to answer complex queries, and retrieval-augmented generation (RAG) is employed to enrich responses with external information, like species weight and IUCN status (information that cannot be obtained through direct visual analysis). Combined, this information is used to automatically generate structured reports, providing biodiversity stakeholders with deeper insights into, for example, species abundance, distribution, animal behaviour, and habitat selection. Our approach delivers contextually rich narratives that aid in wildlife management decisions. By providing contextually rich insights, our approach not only reduces manual effort but also supports timely decision making in conservation, potentially shifting efforts from reactive to proactive. Full article
(This article belongs to the Section Electronic Sensors)
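Structurally, the pipeline is: detector → vision-language context → retrieval of external facts → report text. The stubs below only mimic the shape of that flow; YOLOv10-X, Phi-3.5-vision-instruct, and the RAG store are replaced by hard-coded stand-ins, and the species facts are illustrative, not sourced:

```python
def detect(image):
    """Stand-in for YOLOv10-X: labelled bounding boxes."""
    return [{"label": "leopard", "box": (120, 80, 340, 260)}]

def describe(image, detections):
    """Stand-in for the vision-language model: scene-level context."""
    return {"vegetation": "savanna", "time_of_day": "night"}

# Stand-in for retrieval-augmented generation: external facts
# that cannot be read off the image itself.
KNOWLEDGE = {"leopard": {"iucn_status": "Vulnerable", "mean_weight_kg": 52}}

def report(image):
    """Combine detections, context, and retrieved facts into report lines."""
    dets = detect(image)
    ctx = describe(image, dets)
    lines = []
    for d in dets:
        facts = KNOWLEDGE.get(d["label"], {})
        lines.append(
            f"{d['label']} observed ({ctx['vegetation']}, {ctx['time_of_day']}); "
            f"IUCN status: {facts.get('iucn_status', 'unknown')}"
        )
    return "\n".join(lines)

print(report(image=None))
```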

24 pages, 4541 KiB  
Article
Development of a Low-Cost Automated Injection Molding Device for Sustainable Plastic Recycling and Circular Economy Applications
by Ananta Sinchai, Kunthorn Boonyang and Thanakorn Simmala
Inventions 2024, 9(6), 124; https://doi.org/10.3390/inventions9060124 - 17 Dec 2024
Abstract
In response to the critical demand for innovative solutions to tackle plastic pollution, this research presents a low-cost, fully automated plastic injection molding system designed to convert waste into sustainable products. Constructed entirely from repurposed materials, the apparatus focuses on processing high-density polyethylene (HDPE) efficiently without hydraulic components, thereby enhancing eco-friendliness and accessibility. Performance evaluations identified an optimal molding temperature of 200 °C, yielding consistent products with a minimal weight deviation of 4.17%. The key operational parameters included a motor speed of 525 RPM, a gear ratio of 1:30, and an inverter frequency of 105 Hz. Further tests showed that processing temperatures of 210 °C and 220 °C, with injection times of 15 to 35 s, yielded optimal surface finish and complete filling. The surface finish, assessed through image intensity variation, had a low coefficient of variation (≤5%), while computer vision evaluation confirmed the full filling of all specimens in this range. A laser-based overflow detection system has minimized material waste, proving effective in small-scale, community recycling. This study underscores the potential of low-cost automated systems to advance the practices of circular economies and enhance localized plastic waste management. Future research will focus on automation, temperature precision, material adaptability, and emissions management. Full article
(This article belongs to the Section Inventions and Innovation in Advanced Manufacturing)
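The surface-finish check uses the coefficient of variation of image intensity, CV = σ/μ, with values ≤5% accepted. A minimal sketch on hypothetical grey levels:

```python
import statistics

def coefficient_of_variation(intensities):
    """CV = stdev / mean, as a percentage; the study accepts <= 5%."""
    return 100.0 * statistics.stdev(intensities) / statistics.fmean(intensities)

# Hypothetical per-pixel grey levels sampled from a moulded surface.
surface = [200, 204, 198, 202, 201, 199, 203, 197]
cv = coefficient_of_variation(surface)
print(f"CV = {cv:.2f}% -> {'pass' if cv <= 5.0 else 'fail'}")
```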

16 pages, 4714 KiB  
Article
Computer Vision System for Multi-Robot Construction Waste Management: Integrating Cloud and Edge Computing
by Zeli Wang, Xincong Yang, Xianghan Zheng, Daoyin Huang and Binfei Jiang
Buildings 2024, 14(12), 3999; https://doi.org/10.3390/buildings14123999 - 17 Dec 2024
Abstract
Sorting is an important construction waste management tool to increase recycling rates and reduce pollution. Previous studies have used robots to improve the efficiency of construction waste recycling. However, in large construction sites, it is difficult for a single robot to accomplish the task quickly, and multiple robots working together are a better option. Most construction waste recycling robotic systems are developed based on a client-server framework, which means that all robots need to be continuously connected to their respective cloud servers. Such systems are low in robustness in complex environments and waste a lot of computational resources. Therefore, in this paper, we propose a pixel-level automatic construction waste recognition platform with high robustness and low computational resource requirements by combining multiple computer vision technologies with edge computing and cloud computing platforms. Experiments show that the computing platform proposed in this study can achieve a recognition speed of 23.3 fps and a recognition accuracy of 90.81% at the edge computing platform without the help of network and cloud servers. This is 23 times faster than the algorithm used in previous research. Meanwhile, the computing platform proposed in this study achieves 93.2% instance segmentation accuracy on the cloud server side. Notably, this system allows multiple robots to operate simultaneously at the same construction site using only a single server without compromising efficiency, which significantly reduces costs and promotes the adoption of automated construction waste recycling robots. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)
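The robustness claim rests on an edge-first pattern: infer on the robot, and escalate to the cloud only when the edge model is unsure and a server is actually reachable. A structural sketch with stub models; the labels, confidence values, and 0.8 threshold are invented for illustration:

```python
def edge_segment(image):
    """Stand-in for the on-robot model: fast and light, lower accuracy."""
    return {"label": "concrete", "confidence": 0.72}

def cloud_segment(image):
    """Stand-in for server-side instance segmentation: slower, more accurate."""
    return {"label": "concrete", "confidence": 0.93}

def classify(image, cloud_available, threshold=0.8):
    """Edge-first inference: escalate to the cloud only when the edge
    model is unsure and a server is reachable."""
    result = edge_segment(image)
    if result["confidence"] < threshold and cloud_available:
        result = cloud_segment(image)
    return result

print(classify(None, cloud_available=False))  # robust offline path
print(classify(None, cloud_available=True))   # escalates to the cloud
```

This split is why multiple robots can share one server: most frames never leave the edge device.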

22 pages, 10652 KiB  
Article
An Enhanced Cycle Generative Adversarial Network Approach for Nighttime Pineapple Detection of Automated Harvesting Robots
by Fengyun Wu, Rong Zhu, Fan Meng, Jiajun Qiu, Xiaopei Yang, Jinhui Li and Xiangjun Zou
Agronomy 2024, 14(12), 3002; https://doi.org/10.3390/agronomy14123002 - 17 Dec 2024
Abstract
Nighttime pineapple detection for automated harvesting robots is a significant challenge in intelligent agriculture. As a crucial component of robotic vision systems, accurate fruit detection is essential for round-the-clock operations. The study compared advanced end-to-end style transfer models, including U-GAT-IT, SCTNet, and CycleGAN, finding that CycleGAN produced relatively good-quality images but had issues such as the inadequate restoration of nighttime details, color distortion, and artifacts. Therefore, this study further proposed an enhanced CycleGAN approach to address limited nighttime datasets and poor visibility, combining style transfer with small-sample object detection. The improved model features a novel generator structure with ResNeXtBlocks, an optimized upsampling module, and a hyperparameter optimization strategy. This approach achieves a 29.7% reduction in FID score compared to the original CycleGAN. When applied to YOLOv7-based detection, this method significantly outperforms existing approaches, improving precision, recall, average precision, and F1 score by 13.34%, 45.11%, 56.52%, and 30.52%, respectively. These results demonstrate the effectiveness of our enhanced CycleGAN in expanding limited nighttime datasets and supporting efficient automated harvesting in low-light conditions, contributing to the development of more versatile agricultural robots capable of continuous operation. Full article
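The reported gains are in standard detection metrics. As a refresher, precision, recall, and F1 follow directly from true-positive/false-positive/false-negative counts; the counts below are made up to show the arithmetic, not taken from the paper:

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from detection counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for a detector before and after dataset augmentation.
before = detection_metrics(tp=60, fp=25, fn=40)
after = detection_metrics(tp=90, fp=12, fn=10)
for name, b, a in zip(("precision", "recall", "F1"), before, after):
    print(f"{name}: {b:.3f} -> {a:.3f} (+{100 * (a - b) / b:.1f}%)")
```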

13 pages, 6856 KiB  
Article
Mind the Step: An Artificial Intelligence-Based Monitoring Platform for Animal Welfare
by Andrea Michielon, Paolo Litta, Francesca Bonelli, Gregorio Don, Stefano Farisè, Diana Giannuzzi, Marco Milanesi, Daniele Pietrucci, Angelica Vezzoli, Alessio Cecchinato, Giovanni Chillemi, Luigi Gallo, Marcello Mele and Cesare Furlanello
Sensors 2024, 24(24), 8042; https://doi.org/10.3390/s24248042 - 17 Dec 2024
Abstract
We present an artificial intelligence (AI)-enhanced monitoring framework designed to assist personnel in evaluating and maintaining animal welfare using a modular architecture. This framework integrates multiple deep learning models to automatically compute metrics relevant to assessing animal well-being. Using deep learning for AI-based vision adapted from industrial applications and human behavioral analysis, the framework includes modules for markerless animal identification and health status assessment (e.g., locomotion score and body condition score). Methods for behavioral analysis are also included to evaluate how nutritional and rearing conditions impact behaviors. These models are initially trained on public datasets and then fine-tuned on original data. We demonstrate the approach through two use cases: a health monitoring system for dairy cattle and a piglet behavior analysis system. The results indicate that scalable deep learning and edge computing solutions can support precision livestock farming by automating welfare assessments and enabling timely, data-driven interventions. Full article

23 pages, 2200 KiB  
Review
Recent Advancements in Artificial Intelligence in Battery Recycling
by Subin Antony Jose, Connor Andrew Dennis Cook, Joseph Palacios, Hyundeok Seo, Christian Eduardo Torres Ramirez, Jinhong Wu and Pradeep L. Menezes
Batteries 2024, 10(12), 440; https://doi.org/10.3390/batteries10120440 - 11 Dec 2024
Abstract
Battery recycling has become increasingly crucial in mitigating environmental pollution and conserving valuable resources. As demand for battery-powered devices rises across industries like automotive, electronics, and renewable energy, efficient recycling is essential. Traditional recycling methods, often reliant on manual labor, suffer from inefficiencies and environmental harm. However, recent artificial intelligence (AI) advancements offer promising solutions to these challenges. This paper reviews the latest developments in AI applications for battery recycling, focusing on methodologies, challenges, and future directions. AI technologies, particularly machine learning and deep learning models, are revolutionizing battery sorting, classification, and disassembly processes. AI-powered systems enhance efficiency by automating tasks such as battery identification, material characterization, and robotic disassembly, reducing human error and occupational hazards. Additionally, integrating AI with advanced sensing technologies like computer vision, spectroscopy, and X-ray imaging allows for precise material characterization and real-time monitoring, optimizing recycling strategies and material recovery rates. Despite these advancements, data quality, scalability, and regulatory compliance must be addressed to realize AI’s full potential in battery recycling. Collaborative efforts across interdisciplinary domains are essential to develop robust, scalable AI-driven recycling solutions, paving the way for a sustainable, circular economy in battery materials. Full article
(This article belongs to the Special Issue Towards a Smarter Battery Management System: 2nd Edition)

16 pages, 5538 KiB  
Article
Vision-Based Acquisition Model for Molten Pool and Weld-Bead Profile in Gas Metal Arc Welding
by Gwang-Gook Kim, Dong-Yoon Kim and Jiyoung Yu
Metals 2024, 14(12), 1413; https://doi.org/10.3390/met14121413 - 10 Dec 2024
Abstract
Gas metal arc welding (GMAW) is widely used for its productivity and ease of automation across various industries. However, certain tasks in shipbuilding and heavy industry still require manual welding, where quality depends heavily on operator skill. Defects in manual welding often necessitate costly rework, reducing productivity. Vision sensing has become essential in automated welding, capturing dynamic changes in the molten pool and arc length for real-time defect insights. Laser vision sensors are particularly valuable for their high-precision bead profile data; however, most current models require offline inspection, limiting real-time application. This study proposes a deep learning-based system for the real-time monitoring of both the molten pool and weld-bead profile during GMAW. The system integrates an optimized optical design to reduce arc light interference, enabling the continuous acquisition of both molten pool images and 3D bead profiles. Experimental results demonstrate that the molten pool classification models achieved accuracies of 99.76% with ResNet50 and 99.02% with MobileNetV4, fulfilling real-time requirements with inference times of 6.53 ms and 9.06 ms, respectively. By combining 2D and 3D data through a semantic segmentation algorithm, the system enables the accurate, real-time extraction of weld-bead geometry, offering comprehensive weld quality monitoring that satisfies the performance demands of real-time industrial applications. Full article
(This article belongs to the Special Issue Welding and Fatigue of Metallic Materials)
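Real-time claims such as 6.53 ms per inference are typically measured by averaging wall-clock time over many runs after a warm-up. A stdlib sketch with a dummy function standing in for the network's forward pass; the 33.3 ms budget assumes a ~30 fps requirement:

```python
import time

def mean_latency_ms(fn, arg, warmup=3, runs=20):
    """Average wall-clock call time in milliseconds. Warm-up calls are
    excluded because caches and lazy initialisation skew the first runs."""
    for _ in range(warmup):
        fn(arg)
    start = time.perf_counter()
    for _ in range(runs):
        fn(arg)
    return 1000.0 * (time.perf_counter() - start) / runs

# Dummy stand-in for a molten-pool classifier forward pass.
def classify(frame):
    return sum(frame) % 3

latency = mean_latency_ms(classify, list(range(10_000)))
budget_ms = 33.3  # ~30 fps real-time budget
print(f"{latency:.3f} ms per frame ({'ok' if latency < budget_ms else 'too slow'})")
```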

23 pages, 7207 KiB  
Article
Research on Pork Cut and Freshness Determination Method Based on Computer Vision
by Shihao Song, Qiqi Guo, Xiaosa Duan, Xiaojing Shi and Zhenyu Liu
Foods 2024, 13(24), 3986; https://doi.org/10.3390/foods13243986 - 10 Dec 2024
Abstract
With the increasing importance of meat quality inspection, traditional manual evaluation methods face challenges in terms of efficiency and accuracy. To improve the precision and efficiency of pork quality assessment, an automated detection method based on computer vision technology is proposed for evaluating different parts and freshness of pork. First, high-resolution cameras were used to capture image data of Jinfen white pigs, covering three pork cuts—hind leg, loin, and belly—across three different collection times. These three parts were categorized into nine datasets, and the sample set was expanded through digital image processing techniques. Next, five convolutional neural network models—VGGNet, ResNet, DenseNet, MobileNet, and EfficientNet—were selected for feature recognition experiments. The experimental results showed that the MobileNetV3_Small model achieved an accuracy of 98.59%, outperforming other classical network architectures while being more lightweight. Further statistical analysis revealed that the p-values for ResNet101, EfficientNetB0, and EfficientNetB1 were all greater than 0.05, indicating that the performance differences between these models and MobileNetV3_Small were not statistically significant. In contrast, other models showed significant performance differences (p-value < 0.05). Finally, based on the PYQT5 framework, the MobileNetV3_Small model was deployed on a local client, realizing an efficient and accurate end-to-end automatic recognition system. These findings can be used to effectively enhance the efficiency and reliability of pork quality detection, providing a solid foundation for the development of pork safety monitoring systems. Full article
(This article belongs to the Section Food Analytical Methods)

16 pages, 4778 KiB  
Article
Automating Quality Control on a Shoestring, a Case Study
by Hang Sun, Wei-Ting Teo, Kenji Wong, Botao Dong, Jan Polzer and Xun Xu
Machines 2024, 12(12), 904; https://doi.org/10.3390/machines12120904 - 10 Dec 2024
Abstract
Dependence on manual inspections for quality control often results in errors, especially after prolonged periods of work that heighten the risk of missed defects. There is no shortage of expensive commercial inspection systems that can carry out the quality control work satisfactorily. However, small to medium-sized enterprises (SMEs) often face challenges in adopting these new systems for their production workflows because of the associated integration risks, high cost, and skill complexity. To address these issues, a portable, cost-effective, and automated quality inspection system was developed as an introductory tool for SMEs. Leveraging computer vision, 3D-printed mechanical parts, and accessible components, this system offers a 360-degree inspection of production line products, enabling SMEs to explore automation with minimal investment. It features a brief training phase using a few defect-free parts to reduce the skill barrier, thus helping SMEs to transition towards smart manufacturing. These help to address the main technology adoption barriers of cost, risk, and complexity. The system’s performance was validated through repeated testing on a large sheet metal chassis installed in uninterruptible power supplies (UPS), confirming its effectiveness as a steppingstone toward more advanced smart manufacturing solutions. Full article

16 pages, 8265 KiB  
Article
Robotized 3D Scanning and Alignment Method for Dimensional Qualification of Big Parts Printed by Material Extrusion
by Juan Carlos Antolin-Urbaneja, Rakel Pacheco Goñi, Nerea Alberdi Olaizola and Ana Isabel Luengo Pizarro
Robotics 2024, 13(12), 175; https://doi.org/10.3390/robotics13120175 - 10 Dec 2024
Abstract
Moulds for aeronautical applications must fulfil highly demanding requirements, including the geometrical tolerances before and after curing cycles at high temperatures and pressures. The growing availability of thermoplastic materials printed by material extrusion systems requires research to verify the geometrical accuracy after three-dimensional printing processes to assess whether the part can meet the required geometry through milling processes. In this sense, the application of automated techniques to deliver quick and reliable measurements is an open point for this promising technology. This work investigates the integration of a 3D vision system using a structured-light 3D scanner, placed onto an industrial robot in an eye-in-hand configuration and synchronized by a computer. The complete system validates an in-house algorithm, which inspects the whole reconstructed part, acquiring several views from different poses, and makes the alignment with the theoretical model of the geometry of big parts manufactured by 3D printing. Moreover, the automation of the validation process for the manufactured parts using contactless detection of the offset-printed material can be used to define milling strategies to achieve the geometric qualifications. The algorithm was tested using several parts printed by the material extrusion of a thermoplastic material based on black polyamide 6 reinforced with short carbon fibres. The complete inspection process was performed in 38 s in the three studied cases. The results confirm that more than 95.50% of the evaluated points of each reconstructed point cloud differed by less than one millimetre from the theoretical model. Full article
(This article belongs to the Section Industrial Robots and Automation)
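The dimensional check reduces to: after alignment, what fraction of scanned points lies within 1 mm of the theoretical model? A sketch on synthetic, pre-corresponded point lists; real clouds would need nearest-neighbour correspondence after registration, which is omitted here:

```python
import math

def within_tolerance(points, reference, tol_mm=1.0):
    """Fraction of scanned points lying within tol_mm of their
    corresponding theoretical (already aligned) points."""
    hits = sum(1 for p, q in zip(points, reference) if math.dist(p, q) <= tol_mm)
    return hits / len(points)

# Hypothetical aligned clouds: 90 points deviate 0.3 mm, 10 deviate 1.5 mm.
reference = [(float(i), 0.0, 0.0) for i in range(100)]
scanned = [(i + (0.3 if i % 10 else 1.5), 0.0, 0.0) for i in range(100)]
ratio = within_tolerance(scanned, reference)
print(f"{100 * ratio:.1f}% of points within 1 mm")
```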

19 pages, 30513 KiB  
Article
From Detection to Action: A Multimodal AI Framework for Traffic Incident Response
by Afaq Ahmed, Muhammad Farhan, Hassan Eesaar, Kil To Chong and Hilal Tayara
Abstract
With the rising incidence of traffic accidents and growing environmental concerns, the demand for advanced systems to ensure traffic and environmental safety has become increasingly urgent. This paper introduces an automated highway safety management framework that integrates computer vision and natural language processing for real-time monitoring, analysis, and reporting of traffic incidents. The system not only identifies accidents but also aids in coordinating emergency responses, such as dispatching ambulances, fire services, and police, while simultaneously managing traffic flow. The approach begins with the creation of a diverse highway accident dataset, combining public datasets with drone and CCTV footage. YOLOv11s is retrained on this dataset to enable real-time detection of critical traffic elements and anomalies, such as collisions and fires. A vision–language model (VLM), Moondream2, is employed to generate detailed scene descriptions, which are further refined by a large language model (LLM), GPT 4-Turbo, to produce concise incident reports and actionable suggestions. These reports are automatically sent to relevant authorities, ensuring prompt and effective response. The system’s effectiveness is validated through the analysis of diverse accident videos and zero-shot simulation testing within the Webots environment. The results highlight the potential of combining drone and CCTV imagery with AI-driven methodologies to improve traffic management and enhance public safety. Future work will include refining detection models, expanding dataset diversity, and deploying the framework in real-world scenarios using live drone and CCTV feeds. This study lays the groundwork for scalable and reliable solutions to address critical traffic safety challenges. Full article

20 pages, 8275 KiB  
Article
Automated Visual Inspection for Precise Defect Detection and Classification in CBN Inserts
by Li Zeng, Feng Wan, Baiyun Zhang and Xu Zhu
Sensors 2024, 24(23), 7824; https://doi.org/10.3390/s24237824 - 7 Dec 2024
Abstract
In the high-stakes domain of precision manufacturing, Cubic Boron Nitride (CBN) inserts are pivotal for their hardness and durability. However, post-production surface defects on these inserts can compromise product integrity and performance. This paper proposes an automated detection and classification system using machine vision to scrutinize these surface defects. By integrating an optical bracket, a high-resolution industrial camera, precise lighting, and an advanced development board, the system employs digital image processing to ascertain and categorize imperfections on CBN inserts. The methodology initiates with a high-definition image capture by the imaging platform, tailored for CBN insert inspection. A suite of defect detection algorithms undergoes comparative analysis to discern their efficacy, emphasizing the impact of algorithm parameters and dataset diversity on detection precision. The most effective algorithm is then encapsulated into a versatile application, ensuring compatibility with various operating systems. Empirical verification of the system shows that the detection accuracy of multiple defect types exceeds 90%, and the tooth surface recognition efficiency significantly reaches three frames per second, with the front and side cutting surfaces of the tool in each frame. This breakthrough indicates a scalable, reliable solution for automatically detecting and classifying surface defects on CBN inserts, paving the way for enhanced quality control in automated, high-speed production lines. Full article
(This article belongs to the Special Issue Dalian University of Technology Celebrating 75th Anniversary)
