Search Results (2,722)

Search Parameters:
Keywords = autonomous driving

20 pages, 5327 KiB  
Article
Using a YOLO Deep Learning Algorithm to Improve the Accuracy of 3D Object Detection by Autonomous Vehicles
by Ramavhale Murendeni, Alfred Mwanza and Ibidun Christiana Obagbuwa
World Electr. Veh. J. 2025, 16(1), 9; https://doi.org/10.3390/wevj16010009 (registering DOI) - 27 Dec 2024
Abstract
This study presents an adaptation of the YOLOv4 deep learning algorithm for 3D object detection, addressing a critical challenge in autonomous vehicle (AV) systems: accurate real-time perception of the surrounding environment in three dimensions. Traditional 2D detection methods, while efficient, fall short in providing the depth and spatial information necessary for safe navigation. This research modifies the YOLOv4 architecture to predict 3D bounding boxes, object depth, and orientation. Key contributions include introducing a multi-task loss function that optimizes 2D and 3D predictions and integrating sensor fusion techniques that combine RGB camera data with LIDAR point clouds for improved depth estimation. The adapted model, tested on real-world datasets, demonstrates a significant increase in 3D detection accuracy, achieving a mean average precision (mAP) of 85%, intersection over union (IoU) of 78%, and near real-time performance at 93–97% for detecting vehicles and 75–91% for detecting people. This approach balances high detection accuracy and real-time processing, making it highly suitable for AV applications. This study advances the field by showing how an efficient 2D detector can be extended to meet the complex demands of 3D object detection in real-world driving scenarios without sacrificing computational efficiency. Full article
(This article belongs to the Special Issue Motion Planning and Control of Autonomous Vehicles)
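
The multi-task loss described above combines 2D box, depth, and orientation terms into one objective. The sketch below shows one way such a weighted combination can be written; the specific terms, weights, and array layout are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def multi_task_loss(pred, target, w2d=1.0, w_depth=1.0, w_orient=0.5):
    """Illustrative weighted sum of 2D-box, depth, and orientation terms.

    pred/target are dicts of NumPy arrays; the keys and weights are
    assumptions for this sketch, not the paper's actual formulation.
    """
    # Smooth-L1 on 2D box parameters (x, y, w, h)
    diff = np.abs(pred["box2d"] - target["box2d"])
    box2d = np.where(diff < 1.0, 0.5 * diff**2, diff - 0.5).sum()

    # L1 on per-object depth (metres)
    depth = np.abs(pred["depth"] - target["depth"]).sum()

    # Wrap-aware orientation error (yaw in radians)
    dyaw = pred["yaw"] - target["yaw"]
    orient = (1.0 - np.cos(dyaw)).sum()

    return w2d * box2d + w_depth * depth + w_orient * orient

# Toy example: two detected objects
pred = {"box2d": np.array([[0.5, 0.5, 0.2, 0.3], [0.1, 0.4, 0.1, 0.2]]),
        "depth": np.array([12.3, 30.1]),
        "yaw": np.array([0.10, 1.55])}
target = {"box2d": np.array([[0.52, 0.48, 0.21, 0.30], [0.12, 0.41, 0.09, 0.22]]),
          "depth": np.array([12.0, 31.0]),
          "yaw": np.array([0.05, 1.60])}
print(multi_task_loss(pred, target))
```
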
20 pages, 5345 KiB  
Article
Enhancing Autonomous Driving in Urban Scenarios: A Hybrid Approach with Reinforcement Learning and Classical Control
by Rodrigo Gutiérrez-Moreno, Rafael Barea, Elena López-Guillén, Felipe Arango, Fabio Sánchez-García and Luis M. Bergasa
Sensors 2025, 25(1), 117; https://doi.org/10.3390/s25010117 (registering DOI) - 27 Dec 2024
Abstract
The use of deep learning (DL) algorithms in the domain of decision-making (DM) for autonomous vehicles (AVs) has garnered significant attention in the literature in recent years, showcasing considerable potential. Nevertheless, most of the solutions proposed by the scientific community encounter difficulties in real-world applications. This paper aims to provide a realistic implementation of a hybrid DM module in an autonomous driving (AD) stack, integrating the learning capabilities from the experience of deep reinforcement learning (DRL) algorithms and the reliability of classical methodologies. Our DM system is in charge of generating steering and velocity signals using the HD map information and pre-processed sensor data. This work encompasses the implementation of concatenated scenarios in simulated environments, and the integration of AD modules. Specifically, the authors address the DM problem by employing a Partially Observable Markov Decision Process (POMDP) formulation and offer a solution through the use of DRL algorithms. Furthermore, an additional control module to execute the decisions in a safe and comfortable way through a hybrid architecture is presented. The proposed architecture is validated in the CARLA simulator by navigating through multiple concatenated scenarios, outperforming the CARLA Autopilot in terms of completion time, while ensuring both safety and comfort. Full article
(This article belongs to the Special Issue Advances in Sensing, Imaging and Computing for Autonomous Driving)
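
The hybrid architecture pairs a learned decision-making policy with a classical control layer that turns discrete decisions into steering and velocity commands. A minimal sketch of that split is shown below; the action set, gains, and the stubbed policy are assumptions for illustration only.

```python
import numpy as np

ACTIONS = {0: "keep_lane", 1: "yield", 2: "go"}  # assumed discrete decision set

def drl_policy(observation: np.ndarray) -> int:
    """Stand-in for a trained DRL policy acting on a POMDP observation.
    Here it simply picks an action from fixed logits for illustration."""
    logits = np.array([0.2, 0.1, 0.7])
    return int(np.argmax(logits))

def classical_controller(decision: int, ego_speed: float, target_speed: float,
                         lateral_error: float, kp_v: float = 0.5, kp_y: float = 0.8):
    """Map a high-level decision to acceleration and steering commands
    with simple proportional laws (gains are illustrative)."""
    if ACTIONS[decision] == "yield":
        target_speed = 0.0                      # come to a stop
    accel = kp_v * (target_speed - ego_speed)   # longitudinal P-control
    steer = -kp_y * lateral_error               # lane-keeping P-control
    return accel, steer

obs = np.zeros(16)                              # placeholder observation vector
decision = drl_policy(obs)
print(ACTIONS[decision], classical_controller(decision, ego_speed=8.0,
                                              target_speed=10.0, lateral_error=0.3))
```
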
16 pages, 27953 KiB  
Article
Query-Based Instance Segmentation with Dual Attention Transformer for Autonomous Vehicles
by Aya Taourirte and Li-Hong Juang
World Electr. Veh. J. 2025, 16(1), 8; https://doi.org/10.3390/wevj16010008 (registering DOI) - 27 Dec 2024
Abstract
Applications such as autonomous driving demand real-time and high-precision instance segmentation to accurately identify and understand objects in an environment, including pedestrians, vehicles, and traffic signs. Ensuring a balance between accuracy and efficiency in instance segmentation systems is critical for such tasks. Traditional convolutional models face limitations in capturing complex features and global context effectively. To address these challenges, we propose an enhanced QueryInst-based instance segmentation framework. First, we replace the traditional CNN backbone with the DaViT Transformer to extract richer, multi-scale features. Next, we integrate CARAFE into the Feature Pyramid Network to capture global context and recover missed instances. Finally, we incorporate the Complete IoU (CIoU) loss function to optimize object localization and improve prediction accuracy. Experiments on the Cityscapes and COCO datasets demonstrate that our approach achieves an mIoU score of 46.7% and an AP score of 45.5%, representing improvements of 6.1% and 2.6% over the baseline, respectively, outperforming other state-of-the-art methods. Full article
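
The Complete IoU (CIoU) loss referenced here is a published bounding-box regression loss that adds a normalized center-distance term and an aspect-ratio consistency term to the IoU. A small NumPy rendering for axis-aligned boxes in (x1, y1, x2, y2) format:

```python
import numpy as np

def ciou_loss(box_p, box_g, eps=1e-9):
    """CIoU loss for two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection over union
    ix1, iy1 = max(box_p[0], box_g[0]), max(box_p[1], box_g[1])
    ix2, iy2 = min(box_p[2], box_g[2]), min(box_p[3], box_g[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_g = (box_g[2] - box_g[0]) * (box_g[3] - box_g[1])
    iou = inter / (area_p + area_g - inter + eps)

    # Squared centre distance, normalised by the diagonal of the enclosing box
    cpx, cpy = (box_p[0] + box_p[2]) / 2, (box_p[1] + box_p[3]) / 2
    cgx, cgy = (box_g[0] + box_g[2]) / 2, (box_g[1] + box_g[3]) / 2
    rho2 = (cpx - cgx) ** 2 + (cpy - cgy) ** 2
    ex1, ey1 = min(box_p[0], box_g[0]), min(box_p[1], box_g[1])
    ex2, ey2 = max(box_p[2], box_g[2]), max(box_p[3], box_g[3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + eps

    # Aspect-ratio consistency term
    wp, hp = box_p[2] - box_p[0], box_p[3] - box_p[1]
    wg, hg = box_g[2] - box_g[0], box_g[3] - box_g[1]
    v = (4 / np.pi ** 2) * (np.arctan(wg / (hg + eps)) - np.arctan(wp / (hp + eps))) ** 2
    alpha = v / ((1 - iou) + v + eps)

    return 1 - iou + rho2 / c2 + alpha * v

print(ciou_loss((10, 10, 50, 60), (12, 8, 55, 58)))
```
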
22 pages, 22974 KiB  
Article
EOR: An Enhanced Object Registration Method for Visual Images and High-Definition Maps
by Nian Hui, Zijie Jiang, Zhongliang Cai and Shen Ying
Remote Sens. 2025, 17(1), 66; https://doi.org/10.3390/rs17010066 (registering DOI) - 27 Dec 2024
Abstract
Accurate object registration is crucial for precise localization and environment sensing in autonomous driving systems. While real-time sensors such as cameras and radar capture the local environment, high-definition (HD) maps provide a global reference frame that enhances localization accuracy and robustness, especially in complex scenarios. In this paper, we propose an innovative method called enhanced object registration (EOR) to improve the accuracy and robustness of object registration between camera images and HD maps. Our research investigates the influence of spatial distribution factors and spatial structural characteristics of objects in visual perception and HD maps on registration accuracy and robustness. We specifically focus on understanding the varying importance of different object types and the constrained dimensions of pose estimation. These factors are integrated into a nonlinear optimization model and extended Kalman filter framework. Through comprehensive experimentation on the open-source Argoverse 2 dataset, the proposed EOR demonstrates the ability to maintain high registration accuracy in lateral and elevation dimensions, improve longitudinal accuracy, and increase the probability of successful registration. These findings contribute to a deeper understanding of the relationship between sensing data and scenario understanding in object registration for vehicle localization. Full article
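
EOR couples a nonlinear optimization model with an extended Kalman filter. The snippet below is a generic EKF predict/update cycle on a toy 2D pose state; the state layout, models, and noise covariances are illustrative assumptions rather than the filter used in the paper.

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, F, H, Q, R):
    """One generic EKF cycle: predict with motion model f, correct with
    measurement model h; F and H are their Jacobians at the current state."""
    # Predict
    x_pred = f(x, u)
    P_pred = F @ P @ F.T + Q
    # Update
    y = z - h(x_pred)                          # innovation
    S = H @ P_pred @ H.T + R                   # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy 2D pose example: state (x, y, heading), odometry input (v, w), dt = 0.1 s
dt = 0.1
f = lambda x, u: x + dt * np.array([u[0] * np.cos(x[2]), u[0] * np.sin(x[2]), u[1]])
h = lambda x: x[:2]                            # position fix from map registration
x, P = np.zeros(3), np.eye(3) * 0.1
F = np.eye(3)
F[0, 2], F[1, 2] = -dt * 5.0 * np.sin(x[2]), dt * 5.0 * np.cos(x[2])
H = np.array([[1.0, 0, 0], [0, 1.0, 0]])
x, P = ekf_step(x, P, u=(5.0, 0.02), z=np.array([0.51, 0.01]),
                f=f, h=h, F=F, H=H, Q=np.eye(3) * 0.01, R=np.eye(2) * 0.05)
print(x)
```
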
20 pages, 3233 KiB  
Article
Preemptive-Level-Based Cooperative Autonomous Vehicle Trajectory Optimization for Unsignalized Intersection with Mixed Traffic
by Pengrui Li, Miaomiao Liu, Mingyue Zhu and Minkun Yao
Abstract
Buses constitute a crucial component of public transportation systems in numerous urban centers. Integrating autonomous driving technology into the bus transportation ecosystem has the potential to enhance overall urban mobility. The management of mixed traffic at intersections, involving both private vehicles and buses, particularly in the presence of bus lanes, presents several formidable challenges. This study proposes a preemptive-level-based cooperative autonomous vehicle (AV) trajectory optimization for intersections with mixed traffic. It takes into account dynamic changes in the intersection’s passing sequence, trajectory selection, and adherence to traffic regulations, including the different statuses of bus lanes. Based on the spatio-temporal coupling constraints of each vehicle trajectory at intersections, a preemptive-level-based AV passing order optimization method is proposed. Subsequently, a speed control mechanism is introduced to decouple these constraints, thereby preventing vehicle conflicts and reducing unnecessary braking. Ultimately, trajectory routes for multi-exit roads are selected, prioritizing traffic efficiency. In simulated validations, two representative types of intersections from the actual road network were selected, and eight typical scenarios were established, including the operating status of bus lanes and different percentages of buses. The results indicate that the proposed method improves intersection traffic efficiency by at least 12.55%, accompanied by a significant 8.93% reduction in fuel consumption. This study verified that the proposed method significantly enhances intersection efficiency and reduces energy consumption while ensuring safety. Full article
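
As a rough illustration of preemptive-level-based ordering, the sketch below sorts approaching vehicles by an assumed priority (buses ahead of private AVs when the bus lane is active) and spaces their crossings by a fixed headway; the priority rule and headway value are assumptions, not the paper's optimization model.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    vid: str
    kind: str          # "bus" or "car"
    eta: float         # earliest arrival time at the conflict zone (s)

def assign_passing_order(vehicles, bus_lane_active=True, headway=2.0):
    """Sort by (priority level, earliest arrival) and space crossings by a
    minimum headway; returns (vehicle id, scheduled crossing time) pairs."""
    def level(v):
        return 0 if (bus_lane_active and v.kind == "bus") else 1
    schedule, t_free = [], 0.0
    for v in sorted(vehicles, key=lambda v: (level(v), v.eta)):
        t_cross = max(v.eta, t_free)        # cannot cross before arriving
        schedule.append((v.vid, round(t_cross, 2)))
        t_free = t_cross + headway          # reserve the conflict zone
    return schedule

fleet = [Vehicle("car1", "car", 3.0), Vehicle("bus1", "bus", 4.5),
         Vehicle("car2", "car", 4.0)]
print(assign_passing_order(fleet))
```
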
28 pages, 1565 KiB  
Article
Promoting Sustainable Transportation: How People Trust and Accept Autonomous Vehicles—Focusing on the Different Levels of Collaboration Between Human Drivers and Artificial Intelligence—An Empirical Study with Partial Least Squares Structural Equation Modeling and Multi-Group Analysis
by Yi Yang and Min-Yong Kim
Sustainability 2025, 17(1), 125; https://doi.org/10.3390/su17010125 - 27 Dec 2024
Abstract
Despite advancements in autonomous vehicles (AVs), public trust and acceptance are crucial for their widespread adoption. This study examines how different collaboration levels between human drivers and artificial intelligence influence users’ trust and acceptance of AVs. Using an extended Technology Acceptance Model, this study incorporates psychological factors and technological attitudes such as perceived safety, perceived risk, AI literacy, and AI technophobia. Data collected from 392 vehicle owners across 11 Chinese cities were analyzed using Partial Least Squares Structural Equation Modeling and Multi-Group Analysis. The findings reveal that at the fully manual level, perceived ease of use significantly influences perceived usefulness, while trust remains grounded in mechanical reliability rather than AI systems. In contrast, as AI assumes driving responsibilities at collaborative automation levels, the findings show that AI literacy significantly increases perceived trust and ease of use, while AI technophobia decreases them, with these effects varying across different driving automation levels. As AI takes on greater driving responsibilities, perceived ease of use becomes less critical, and perceived trust increasingly influences users’ acceptance. These findings highlight the need for targeted public education and phased automation strategies, offering guidance for AV developers to address user concerns and build trust in autonomous technologies. By enhancing public trust and acceptance, this study contributes to sustainable development by promoting safer roads and enabling more efficient, resource-conscious transportation systems. Gradually integrating AVs into urban mobility also supports smart city initiatives, fostering more sustainable urban environments. Full article
27 pages, 310 KiB  
Article
Data Security in Autonomous Driving: Multifaceted Challenges of Technology, Law, and Social Ethics
by Yao Xu, Jixin Wei, Ting Mi and Zhihua Chen
World Electr. Veh. J. 2025, 16(1), 6; https://doi.org/10.3390/wevj16010006 - 27 Dec 2024
Abstract
The widespread adoption of autonomous driving technology heavily relies on data acquisition and processing, which, while providing an intelligent experience for users, also raises concerns about data security, personal privacy, and data exploitation. The data security of autonomous driving faces challenges from three aspects: technology, law, and social ethics. Thus, this article adopts interdisciplinary research methods to identify these challenges and provide solutions from diverse disciplinary perspectives. (a) Technologically, issues such as data leakage, storage vulnerabilities, and the risk of re-identifying anonymous data persist; (b) legally, there is an urgent need to clarify the responsible parties and address issues related to outdated data security legislation and legal conflicts arising from cross-border data flows; (c) socially and ethically, the risks of data misuse and the emergence of exploitative contracts have triggered public concerns about data privacy. To address these challenges, this article proposes technical countermeasures such as utilizing diverse Privacy Enhancing Technologies (PETs) to enhance data anonymity, optimizing data encryption techniques, and reinforcing data monitoring and access control management. Legal measures should include establishing a comprehensive data security protection framework, clarifying accountability, and developing standards for the classification and grading of autonomous vehicle data. In the field of social ethics, emphasis is placed on safeguarding the public’s right to know, establishing a transparent system for data use, offering an alternative “data security” solution that allows users to choose between heightened privacy protection and enhanced personalized services, and also advocating ethical data utilization and technological development. By implementing these comprehensive strategies, we aim to establish a secure and barrier-free data protection system for autonomous driving, thereby laying a solid foundation for the widespread adoption of autonomous driving technology. Full article
18 pages, 2652 KiB  
Article
EdgeNet: An End-to-End Deep Neural Network Pretrained with Synthetic Data for a Real-World Autonomous Driving Application
by Leanne Miller, Pedro J. Navarro and Francisca Rosique
Sensors 2025, 25(1), 89; https://doi.org/10.3390/s25010089 - 27 Dec 2024
Abstract
This paper presents a novel end-to-end architecture based on edge detection for autonomous driving. The architecture has been designed to bridge the domain gap between synthetic and real-world images for end-to-end autonomous driving applications and includes custom edge detection layers before the EfficientNet convolutional module. To train the architecture, RGB and depth images were used together with inertial data as inputs to predict the driving speed and steering wheel angle. To pretrain the architecture, a synthetic multimodal dataset for autonomous driving applications was created. The dataset includes driving data from 100 diverse weather and traffic scenarios, gathered from multiple sensors including cameras and an IMU as well as from vehicle control variables. The results show that including edge detection layers in the architecture improves performance for transfer learning when using synthetic and real-world data. In addition, pretraining with synthetic data reduces training time and enhances model performance when using real-world data. Full article
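
As a rough sketch of what a fixed edge-detection layer in front of a convolutional backbone could look like, the code below applies Sobel filters to a grayscale frame and stacks the resulting edge map with the input as extra channels; the kernel choice and channel stacking are assumptions, not EdgeNet's actual layers.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv2d(img, kernel):
    """Naive 'same' 2D cross-correlation with zero padding (illustration only)."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def edge_layer(gray):
    """Gradient magnitude from fixed Sobel kernels."""
    gx, gy = conv2d(gray, SOBEL_X), conv2d(gray, SOBEL_Y)
    return np.sqrt(gx ** 2 + gy ** 2)

gray = np.random.rand(64, 64)                  # stand-in camera frame
edges = edge_layer(gray)
stacked = np.stack([gray, edges])              # channels fed to the backbone
print(stacked.shape)                           # (2, 64, 64)
```
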
19 pages, 7424 KiB  
Article
Residual Vision Transformer and Adaptive Fusion Autoencoders for Monocular Depth Estimation
by Wei-Jong Yang, Chih-Chen Wu and Jar-Ferr Yang
Sensors 2025, 25(1), 80; https://doi.org/10.3390/s25010080 - 26 Dec 2024
Abstract
Precision depth estimation plays a key role in many applications, including 3D scene reconstruction, virtual reality, autonomous driving and human–computer interaction. Through recent advancements in deep learning technologies, monocular depth estimation, with its simplicity, has surpassed the traditional stereo camera systems, bringing new possibilities in 3D sensing. In this paper, by using a single camera, we propose an end-to-end supervised monocular depth estimation autoencoder, which contains an encoder that mixes a convolutional neural network with vision transformers and an effective adaptive fusion decoder to obtain high-precision depth maps. In the encoder, we construct a multi-scale feature extractor by mixing residual configurations of vision transformers to enhance both local and global information. In the adaptive fusion decoder, we introduce adaptive fusion modules to effectively merge the features of the encoder and the decoder. Lastly, the model is trained using a loss function that aligns with human perception to enable it to focus on the depth values of foreground objects. The experimental results demonstrate the effective prediction of the depth map from a single-view color image by the proposed autoencoder, which increases the first accuracy rate by about 28% and reduces the root mean square error by about 27% compared to an existing method on the NYU dataset. Full article
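
The reported gains refer to standard monocular-depth metrics: the δ < 1.25 threshold accuracy (the "first accuracy rate") and the root mean square error. Both are straightforward to compute on valid pixels:

```python
import numpy as np

def depth_metrics(pred, gt):
    """Standard monocular depth metrics on valid (gt > 0) pixels."""
    mask = gt > 0
    pred, gt = pred[mask], gt[mask]
    ratio = np.maximum(pred / gt, gt / pred)
    delta1 = np.mean(ratio < 1.25)                 # "first accuracy rate"
    rmse = np.sqrt(np.mean((pred - gt) ** 2))      # root mean square error
    return delta1, rmse

gt = np.random.uniform(0.5, 10.0, size=(480, 640))      # toy ground-truth depth (m)
pred = gt * np.random.uniform(0.9, 1.1, size=gt.shape)  # toy prediction
print(depth_metrics(pred, gt))
```
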
22 pages, 6720 KiB  
Article
Gridless DOA Estimation with Extended Array Aperture in Automotive Radar Applications
by Pengyu Jiang, Silin Gao, Jie Zhao, Zhe Zhang and Bingchen Zhang
Remote Sens. 2025, 17(1), 33; https://doi.org/10.3390/rs17010033 - 26 Dec 2024
Abstract
Millimeter-wave automotive radar has become an essential tool for autonomous driving, providing reliable sensing capabilities under various environmental conditions. To reduce hardware size and cost, sparse arrays are widely employed in automotive radar systems. Additionally, because the targets detected by automotive radar typically exhibit sparsity, compressed sensing-based algorithms have been utilized for sparse array reconstruction, achieving superior performance. However, traditional compressed sensing algorithms generally assume that targets are located on a finite set of grid points and perform sparse reconstruction based on predefined grids. When targets are off-grid, significant off-grid errors can occur. To address this issue, we propose an automotive radar sparse reconstruction algorithm based on accelerated Atomic Norm Minimization (ANM). By using the Iterative Vandermonde Decomposition and Shrinkage Threshold (IVDST) algorithm, we can achieve fast ANM, which effectively mitigates off-grid errors while reducing reconstruction complexity. Furthermore, we adopt a Generalized Likelihood Ratio Test (GLRT) detector to eliminate noise and clutter in the automotive radar operating environment. Simulation results show that our proposed algorithm significantly improves reconstruction accuracy compared to the iterative soft threshold (IST) algorithm while maintaining the same computational complexity. The effectiveness of the proposed algorithm in practical applications is further validated through real-world data experiments, demonstrating its superior capability in clutter elimination. Full article
(This article belongs to the Topic Multi-Sensor Integrated Navigation Systems)
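
The off-grid problem the paper targets can be illustrated in a few lines: a uniform linear array response for a direction that falls between grid points never correlates perfectly with the nearest on-grid atom, which is the basis-mismatch error that gridless methods such as ANM avoid. The array size and grid spacing below are arbitrary choices for the example.

```python
import numpy as np

def steering_vector(theta_deg, n_elems, d=0.5):
    """ULA steering vector, element spacing d in wavelengths."""
    n = np.arange(n_elems)
    return np.exp(2j * np.pi * d * n * np.sin(np.deg2rad(theta_deg)))

n_elems = 16
grid = np.arange(-60, 61, 2.0)                 # 2-degree DOA grid
true_doa = 13.3                                # falls between grid points

a_true = steering_vector(true_doa, n_elems)
# Normalised correlation of the true response with every on-grid atom
corr = np.array([np.abs(np.vdot(steering_vector(g, n_elems), a_true))
                 for g in grid]) / n_elems
best = grid[np.argmax(corr)]
print(f"nearest grid atom {best:+.1f} deg, correlation {corr.max():.3f} (< 1 = basis mismatch)")
```
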
24 pages, 4109 KiB  
Article
AI-Based Malicious Encrypted Traffic Detection in 5G Data Collection and Secure Sharing
by Gang Han, Haohe Zhang, Zhongliang Zhang, Yan Ma and Tiantian Yang
Abstract
With the development and widespread application of network information, new technologies led by 5G are emerging, resulting in an increasingly complex network security environment and more diverse attack methods. Unlike traditional networks, 5G networks feature higher connection density, faster data transmission speeds, and lower latency, which are widely applied in scenarios such as smart cities, the Internet of Things, and autonomous driving. The vast amounts of sensitive data generated by these applications become primary targets during the processes of collection and secure sharing, and unauthorized access or tampering could lead to severe data breaches and integrity issues. However, as 5G networks extensively employ encryption technologies to protect data transmission, attackers can hide malicious content within encrypted communication, rendering traditional content-based traffic detection methods ineffective for identifying malicious encrypted traffic. To address this challenge, this paper proposes a malicious encrypted traffic detection method based on reconstructive domain adaptation and adversarial hybrid neural networks. The proposed method integrates generative adversarial networks with ResNet, ResNeXt, and DenseNet to construct an adversarial hybrid neural network, aiming to tackle the challenges of encrypted traffic detection. On this basis, a reconstructive domain adaptation module is introduced to reduce the distribution discrepancy between the source domain and the target domain, thereby enhancing cross-domain detection capabilities. By preprocessing traffic data from public datasets, the proposed method is capable of extracting deep features from encrypted traffic without the need for decryption. The generator utilizes the adversarial hybrid neural network module to generate realistic malicious encrypted traffic samples, while the discriminator achieves sample classification through high-dimensional feature extraction. Additionally, the domain classifier within the reconstructive domain adaptation module further improves the model’s stability and generalization across different network environments and time periods. Experimental results demonstrate that the proposed method significantly improves the accuracy and efficiency of malicious encrypted traffic detection in 5G network environments, effectively enhancing the detection performance of malicious traffic in 5G networks. Full article
(This article belongs to the Special Issue Novel Methods Applied to Security and Privacy Problems, Volume II)
21 pages, 6133 KiB  
Article
BEV Semantic Map Reconstruction for Self-Driving Cars with the Multi-Head Attention Mechanism
by Yi-Cheng Liao, Jichiang Tsai and Hsuan-Ying Chien
Abstract
Environmental perception is crucial for safe autonomous driving, enabling accurate analysis of the vehicle’s surroundings. While 3D LiDAR is traditionally used for 3D environment reconstruction, its high cost and complexity present challenges. In contrast, camera-based cross-view frameworks can offer a cost-effective alternative. Hence, this manuscript proposes a new cross-view model to extract mapping features from camera images and then transfer them to a Bird’s-Eye View (BEV) map. In particular, a multi-head attention mechanism in the decoder architecture generates the final semantic map. Each camera learns embedding information corresponding to its position and angle within the BEV map. Cross-view attention fuses information from different perspectives to predict top-down map features enriched with spatial information. The multi-head attention mechanism then performs global dependency matching, enhancing long-range information and capturing latent relationships between features. Transposed convolution replaces traditional upsampling methods, avoiding high similarities of local features and facilitating semantic segmentation inference of the BEV map. Finally, we conduct numerous simulation experiments to verify the performance of our cross-view model. Full article
(This article belongs to the Special Issue Advancement on Smart Vehicles and Smart Travel)
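
The multi-head attention used in the decoder follows the standard scaled dot-product formulation. A compact NumPy version is given below; it omits the learned query/key/value/output projections for brevity, and the head count and dimensions are arbitrary.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(q, k, v, n_heads):
    """Scaled dot-product attention split across n_heads.
    q: (Lq, D), k/v: (Lk, D); D must be divisible by n_heads."""
    Lq, D = q.shape
    d_h = D // n_heads
    out = np.zeros((Lq, D))
    for h in range(n_heads):
        sl = slice(h * d_h, (h + 1) * d_h)
        scores = q[:, sl] @ k[:, sl].T / np.sqrt(d_h)   # (Lq, Lk)
        out[:, sl] = softmax(scores) @ v[:, sl]         # weighted sum of values
    return out

# Toy example: 100 BEV query cells attending over 600 image tokens, D = 64
rng = np.random.default_rng(0)
bev_queries = rng.normal(size=(100, 64))
img_tokens = rng.normal(size=(600, 64))
fused = multi_head_attention(bev_queries, img_tokens, img_tokens, n_heads=8)
print(fused.shape)   # (100, 64)
```
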
25 pages, 9789 KiB  
Article
Comparing User Acceptance in Human–Machine Interfaces Assessments of Shared Autonomous Vehicles: A Standardized Test Procedure
by Ming Yan, Lucia Rampino and Giandomenico Caruso
Appl. Sci. 2025, 15(1), 45; https://doi.org/10.3390/app15010045 - 25 Dec 2024
Abstract
Human–Machine Interfaces (HMIs) in autonomous driving technology have recently gained significant research interest in public transportation. However, most of the studies are biased towards qualitative methods, while combining quantitative and qualitative approaches has yet to receive commensurate attention in measuring user acceptance of design outcome evaluation. To the best of our knowledge, no standardized test procedure that combines quantitative and qualitative methods has been formed to evaluate and compare the interrelationships between different designs of HMIs and their psychological effects on users. This paper proposes a practical and comprehensive protocol to guide assessments of user acceptance of HMI design solutions. We first defined user acceptance and analyzed the existing evaluation methods. Then, specific ergonomic factors and requirements that the designed output HMI should meet were identified. Based on this, we developed a protocol to evaluate a particular HMI solution from in- and out-of-vehicle perspectives. Our theoretical protocol combines objective and subjective measures to compare users’ behavior when interacting with Autonomous Vehicles (AVs) in a virtual experimental environment, especially in public transportation. Standardized testing procedures provide researchers and interaction designers with a practical framework and offer theoretical support for subsequent studies. Full article
(This article belongs to the Special Issue Advances in Autonomous Driving and Smart Transportation)
21 pages, 4001 KiB  
Article
Exponential Trajectory Tracking Control of Nonholonomic Wheeled Mobile Robots
by Plamen Petrov and Ivan Kralov
Mathematics 2025, 13(1), 1; https://doi.org/10.3390/math13010001 - 24 Dec 2024
Abstract
Trajectory tracking control is important in order to realize autonomous driving of mobile robots. From a control standpoint, trajectory tracking can be stated as the problem of stabilizing a tracking error system that describes both position and orientation errors of the mobile robot with respect to a time-parameterized path. In this paper, we address the problem for the trajectory tracking of nonholonomic wheeled mobile robots, and an exponential trajectory tracking controller is designed. The stability analysis is concerned with studying the local exponential stability property of a cascade system, provided that two isolated subsystems are exponentially stable and under certain bound conditions for the interconnection term. A theoretical stability analysis of the dynamic behaviors of the closed-loop system is provided based on the Lyapunov stability theory, and an exponential stability result is proven. An explicit estimate of the set of feasible initial conditions for the error variables is determined. Simulation results for verification of the proposed tracking controller under different operating conditions are given. The obtained results show that the problem of trajectory tracking control of nonholonomic wheeled mobile robots is solved over a large class of reference trajectories with fast convergence and good transient performance. Full article
(This article belongs to the Special Issue Advanced Control Theory in Robot System)
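
For context, the snippet below simulates a classical kinematic tracking law for a unicycle-type robot on the same position/orientation error system; this is a standard textbook (Kanayama-style) controller with illustrative gains, not the exponential controller derived in the paper.

```python
import numpy as np

def tracking_control(state, ref, v_r, w_r, kx=1.0, ky=4.0, kth=2.0):
    """Classical kinematic tracking law for a unicycle robot
    (Kanayama-style; gains are illustrative)."""
    x, y, th = state
    xr, yr, thr = ref
    # Tracking errors expressed in the robot frame
    ex = np.cos(th) * (xr - x) + np.sin(th) * (yr - y)
    ey = -np.sin(th) * (xr - x) + np.cos(th) * (yr - y)
    eth = thr - th
    v = v_r * np.cos(eth) + kx * ex
    w = w_r + v_r * (ky * ey + kth * np.sin(eth))
    return v, w

# Track a circle of radius 5 m at 1 m/s, starting off the reference path
dt, state = 0.01, np.array([1.0, -0.5, 0.1])
v_r, w_r = 1.0, 1.0 / 5.0
for k in range(3000):
    t = k * dt
    ref = np.array([5 * np.sin(w_r * t), 5 * (1 - np.cos(w_r * t)), w_r * t])
    v, w = tracking_control(state, ref, v_r, w_r)
    state += dt * np.array([v * np.cos(state[2]), v * np.sin(state[2]), w])
print("final position error (m):", round(float(np.hypot(ref[0] - state[0], ref[1] - state[1])), 3))
```
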
13 pages, 3378 KiB  
Article
Research on Improved YOLOv7 for Traffic Obstacle Detection
by Yifan Yang, Song Cui, Xuan Xiang, Yuxing Bai, Liguo Zang and Hongshan Ding
World Electr. Veh. J. 2025, 16(1), 1; https://doi.org/10.3390/wevj16010001 - 24 Dec 2024
Abstract
Object detection and recognition algorithms are widely used in applications such as real-time monitoring and autonomous driving. However, there is limited research on traffic obstacle detection in complex scenarios involving road construction and sudden accidents. This gap results in low accuracy and difficulties in recognizing occluded targets, thereby hindering the further development and widespread adoption of intelligent transportation systems. To address these issues, this paper proposes an improved algorithm based on YOLOv7, incorporating a lightweight coordinate attention mechanism to focus on small objects at long distances and capture target location information. The use of a high receptive field enhances the feature hierarchy within the detection network. Additionally, we introduce the focal efficient intersection over union loss function to address sample imbalance, which accelerates the model’s convergence speed, reduces loss values, and improves overall model stability. Our model achieved a detection accuracy of 98.1%, reflecting a 1.4% increase, while also enhancing detection speed and minimizing missed detections. These advancements significantly bolster the model’s performance, demonstrating advantages for real-world applications. Full article
(This article belongs to the Special Issue Research on Intelligent Vehicle Path Planning Algorithm)
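
Coordinate attention factorizes channel attention into direction-aware pooling along height and width, then gates the feature map with the resulting row and column weights. The bare-bones NumPy rendering below uses random weights and skips normalization layers, so it only illustrates the data flow, not the trained module used in the paper.

```python
import numpy as np

def coordinate_attention(x, reduction=8, rng=np.random.default_rng(0)):
    """Direction-aware attention: pool along W and H, share a channel-reducing
    transform, then gate the input with per-(channel, row) and per-(channel,
    column) weights. Weights are random here; in a real layer they are learned."""
    C, H, W = x.shape
    Cr = max(C // reduction, 1)
    w1 = rng.normal(0, 0.1, size=(Cr, C))      # shared channel-reduction "1x1 conv"
    w_h = rng.normal(0, 0.1, size=(C, Cr))     # expansion for the height branch
    w_w = rng.normal(0, 0.1, size=(C, Cr))     # expansion for the width branch

    z_h = x.mean(axis=2)                       # (C, H): average over width
    z_w = x.mean(axis=1)                       # (C, W): average over height
    f = np.maximum(w1 @ np.concatenate([z_h, z_w], axis=1), 0)   # (Cr, H+W), ReLU
    f_h, f_w = f[:, :H], f[:, H:]

    g_h = 1 / (1 + np.exp(-(w_h @ f_h)))       # (C, H) sigmoid gate
    g_w = 1 / (1 + np.exp(-(w_w @ f_w)))       # (C, W) sigmoid gate
    return x * g_h[:, :, None] * g_w[:, None, :]

feat = np.random.rand(64, 20, 20)              # toy feature map (C, H, W)
print(coordinate_attention(feat).shape)        # (64, 20, 20)
```
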