Search Results (1,756)

Search Parameters:
Keywords = open-source software

19 pages, 918 KiB  
Article
Analyzing Key Features of Open Source Software Survivability with Random Forest
by Sohee Park and Gihwon Kwon
Appl. Sci. 2025, 15(2), 946; https://doi.org/10.3390/app15020946 - 18 Jan 2025
Abstract
Open source software (OSS) projects rely on voluntary contributions, but their long-term survivability depends on sustained community engagement and effective problem-solving. Survivability, critical for maintaining project quality and trustworthiness, is closely linked to issue activity, as unresolved issues reflect a decline in maintenance capacity and problem-solving ability. Thus, analyzing issue retention rates provides valuable insights into a project’s health. This study evaluates OSS survivability by identifying the features that influence issue activity and analyzing their relationships with survivability. Kaplan–Meier survival analysis is employed to quantify issue activity and visualize trends in unresolved issue rates, providing a measure of project maintenance dynamics. A random forest model is used to examine the relationships between project features—such as popularity metrics, community engagement, code complexity, and project age—and issue retention rates. The results show that stars significantly reduce issue retention rates, with rates dropping from 0.62 to 0.52 as stars increase to 4000, while larger codebases, higher cyclomatic complexity, and older project age are associated with higher unresolved issue rates, which rise by up to 15%. Forks also have a nonlinear impact, initially stabilizing retention rates but increasing unresolved issues as contributions become unmanageable. By identifying these critical factors and quantifying their impacts, this research offers actionable insights for OSS project managers to enhance project survivability and address key maintenance challenges, ensuring sustainable long-term success.
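The Kaplan–Meier estimator this abstract relies on can be sketched in a few lines of plain Python. The issue durations and closed/censored flags below are invented for illustration and are not data from the study.

```python
# Kaplan-Meier survival estimate: S(t) = product over event times t_i <= t
# of (1 - d_i / n_i), where d_i issues are resolved at t_i and n_i issues
# are still "at risk" (open and under observation) just before t_i.

def kaplan_meier(durations, resolved):
    """Return (time, survival) pairs for each event time.

    durations -- days each issue was observed
    resolved  -- True if the issue was closed (event), False if censored
    """
    event_times = sorted({t for t, r in zip(durations, resolved) if r})
    s, curve = 1.0, []
    for t in event_times:
        n = sum(1 for d in durations if d >= t)   # issues at risk just before t
        d = sum(1 for dd, r in zip(durations, resolved) if dd == t and r)
        s *= 1.0 - d / n
        curve.append((t, s))
    return curve

# Hypothetical issues: (days observed, closed?)
data = [(5, True), (10, True), (10, False), (30, True), (45, False)]
curve = kaplan_meier([d for d, _ in data], [r for _, r in data])
```

Here the surviving fraction (still-open issues) steps down only at resolution events, while censored issues still count toward the at-risk pool — the same mechanism the study uses to track unresolved issue rates over time.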
29 pages, 4950 KiB  
Article
Sustainable Design in Agriculture—Energy Optimization of Solar Greenhouses with Renewable Energy Technologies
by Danijela Nikolić, Saša Jovanović, Nebojša Jurišević, Novak Nikolić, Jasna Radulović, Minja Velemir Radović and Isidora Grujić
Energies 2025, 18(2), 416; https://doi.org/10.3390/en18020416 - 18 Jan 2025
Abstract
In modern agriculture, the cultivation of agricultural products cannot be imagined without greenhouses. This paper presents an energy optimization of a solar greenhouse with a photovoltaic (PV) system and a ground-source heat pump (GSHP). The PV system generates electricity, while the GSHP is used for heating and cooling. The greenhouse was designed with the OpenStudio plug-in in the Google SketchUp environment; the EnergyPlus software (version 8.7.1) was used for energy simulation, and the GenOpt software (version 2.0.0) was used for optimization of the azimuth angle and PV cell efficiency. Results for different solar greenhouse orientations and different photovoltaic module efficiencies are presented in the paper. The obtained optimal azimuth angle of the solar greenhouse was −8°. With the installation of a PV array with higher module efficiency (20–24%), it is possible to achieve annual energy savings of 6.87–101.77%. Also, with a PV module efficiency of 23.94%, the concept of a zero-net-energy solar greenhouse (ZNEG) is achieved at the optimal azimuth and slope angles. Through the environmental analysis of different greenhouses, the CO2 emissions of the PV and GSHP systems are calculated and compared with those from electricity usage. The avoided CO2 emissions for the zero-net-energy greenhouse amount to 6626 kg CO2/year. An economic analysis of the installed renewable energy systems was carried out: with a total investment of €19,326 for the ZNEG, the payback period is 8.63 years.

18 pages, 1820 KiB  
Article
DicomOS: A Preliminary Study on a Linux-Based Operating System Tailored for Medical Imaging and Enhanced Interoperability in Radiology Workflows
by Tiziana Currieri, Orazio Gambino, Roberto Pirrone and Salvatore Vitabile
Electronics 2025, 14(2), 330; https://doi.org/10.3390/electronics14020330 - 15 Jan 2025
Abstract
In this paper, we propose a Linux-based operating system, DicomOS, tailored for medical imaging and enhanced interoperability, addressing user-friendly functionality and the main critical needs in radiology workflows. Traditional operating systems in clinical settings face limitations, such as fragmented software ecosystems and platform-specific restrictions, which disrupt collaborative workflows and hinder diagnostic efficiency. Built on Ubuntu 22.04 LTS, DicomOS integrates essential DICOM functionalities directly into the OS, providing a unified, cohesive platform for image visualization, annotation, and sharing. Methods include custom configurations and the development of graphical user interfaces (GUIs) and command-line tools, making them accessible to both medical professionals and developers. Key applications such as ITK-SNAP and 3D Slicer are seamlessly integrated alongside specialized GUIs that enhance usability without requiring extensive technical expertise. As preliminary work, DicomOS demonstrates the potential to simplify medical imaging workflows, reduce cognitive load, and promote efficient data sharing across diverse clinical settings. However, further evaluations, including structured clinical tests and broader deployment with a distributable ISO image, are needed to validate its effectiveness and scalability in real-world scenarios. The results indicate that DicomOS provides a versatile and adaptable solution, supporting radiologists in routine tasks while facilitating customization for advanced users. As an open-source platform, DicomOS has the potential to evolve alongside medical imaging needs, positioning it as a valuable resource for enhancing workflow integration and clinical collaboration.
(This article belongs to the Section Computer Science & Engineering)

16 pages, 1352 KiB  
Article
Quality Education for Sustainable Development: Evolving Pedagogies to Maintain a Balance Between Knowledge, Skills, and Values: A Case Study of Saudi Universities
by Fatima Abdelrahman MuhammedZein and Shifan Thaha Abdullateef
Sustainability 2025, 17(2), 635; https://doi.org/10.3390/su17020635 - 15 Jan 2025
Abstract
Ozone depletion, global warming, and soil degradation, among other factors, are to a great extent instrumental in making our Earth an unsafe place. Therefore, to prevent further damage, Article 6 of the United Nations Framework Convention on Climate Change (UNFCCC) emphasizes spreading awareness among the members of the planetary community to protect the planet. This study aims to identify teaching pedagogies that can effectively develop awareness and responsibility among university youth for a sustainable future. The study adopts an exploratory triangulation approach and uses three instruments: a closed-ended questionnaire, a focus group interview, and a comparison of the performance of control and experimental groups. Fifty-one faculty members from two government universities of Saudi Arabia (Qassim University, Qassim, and Prince Sattam bin Abdulaziz University, Alkharj), along with 47 students pursuing conversation courses at Level Three at Prince Sattam University, participated in the study. The open-source software JASP 0.9 was used for statistical analysis. The results revealed that constructivist inquiry-based approaches promoted sustainable development education.
(This article belongs to the Section Sustainable Education and Approaches)

13 pages, 1625 KiB  
Article
MetaboLabPy—An Open-Source Software Package for Metabolomics NMR Data Processing and Metabolic Tracer Data Analysis
by Christian Ludwig
Metabolites 2025, 15(1), 48; https://doi.org/10.3390/metabo15010048 - 14 Jan 2025
Abstract
Introduction: NMR spectroscopy is a powerful technique for studying metabolism, either in metabolomics settings or through tracing with stable isotope-enriched metabolic precursors. MetaboLabPy (version 0.9.66) is a free and open-source software package used to process 1D- and 2D-NMR spectra. The software implements a complete workflow of NMR data pre-processing to prepare a series of 1D-NMR spectra for multivariate statistical data analysis. This includes a choice of algorithms for automated phase correction, segmental alignment, spectral scaling, variance stabilisation, export to various software platforms, and analysis of metabolic tracing data. The software has an integrated help system with tutorials that demonstrate standard workflows and explain the capabilities of MetaboLabPy. Materials and Methods: The software is implemented in Python and uses numerous Python toolboxes, such as numpy, scipy, and pandas. It is distributed as three packages: metabolabpy, qtmetabolabpy, and metabolabpytools. The metabolabpy package contains classes to handle NMR data and all the numerical routines necessary to process and pre-process 1D-NMR data and perform multiplet analysis on 2D 1H,13C-HSQC NMR data. The qtmetabolabpy package contains routines related to the graphical user interface; PySide6 is used to produce a modern and user-friendly GUI. The metabolabpytools package contains routines that are not specific to handling NMR data, for example, routines to derive isotopomer distributions from the combination of NMR multiplet and GC-MS data. A deep-learning approach for the latter is currently under development. Results: MetaboLabPy is available via the Python Package Index or via GitHub.
(This article belongs to the Special Issue Open-Source Software in Metabolomics)

16 pages, 8337 KiB  
Article
Computational Chemistry Study of pH-Responsive Fluorescent Probes and Development of Supporting Software
by Ximeng Zhu, Yongchun Wei and Xiaogang Liu
Molecules 2025, 30(2), 273; https://doi.org/10.3390/molecules30020273 - 12 Jan 2025
Abstract
This study employs quantum chemical computational methods to predict the spectroscopic properties of fluorescent probes 2,6-bis(2-benzimidazolyl)pyridine (BBP) and (E)-3-(2-(1H-benzo[d]imidazol-2-yl)vinyl)-9-(2-(2-methoxyethoxy)ethyl)-9H-carbazole (BIMC). Using time-dependent density functional theory (TDDFT), we successfully predicted the fluorescence emission wavelengths of BBP under various protonation states, achieving an average deviation of 6.0% from experimental excitation energies. Molecular dynamics simulations elucidated the microscopic mechanism underlying BBP’s fluorescence quenching under acidic conditions. The spectroscopic predictions for BIMC were performed using the STEOM-DLPNO-CCSD method, yielding an average deviation of merely 0.57% from experimental values. Based on Einstein’s spontaneous emission formula and empirical internal conversion rate formulas, we calculated fluorescence quantum yields for spectral intensity calibration, enabling the accurate prediction of experimental spectra. To streamline the computational workflow, we developed and open-sourced the EasySpecCalc software v0.0.1 on GitHub, aiming to facilitate the design and development of fluorescent probes.
(This article belongs to the Special Issue Fluorescent Probes in Biomedical Detection and Imaging)

21 pages, 4087 KiB  
Article
Enhanced Bug Priority Prediction via Priority-Sensitive Long Short-Term Memory–Attention Mechanism
by Geunseok Yang, Jinfeng Ji and Jaehee Kim
Appl. Sci. 2025, 15(2), 633; https://doi.org/10.3390/app15020633 - 10 Jan 2025
Abstract
The rapid expansion of software applications has led to an increase in the frequency of bugs, which are typically reported through user-submitted bug reports. Developers prioritize these reports based on severity and project schedules. However, the manual process of assigning bug priorities is time-consuming and prone to inconsistencies. To address these limitations, this study presents a Priority-Sensitive LSTM–Attention mechanism for automating bug priority prediction. The proposed approach extracts features such as product and component details from bug repositories and preprocesses the data to ensure consistency. Priority-based feature selection is applied to align the input data with the task of bug prioritization. These features are processed through a Long Short-Term Memory (LSTM) network to capture sequential dependencies, and the outputs are further refined using an Attention mechanism to focus on the most relevant information for prediction. The effectiveness of the proposed model was evaluated using datasets from the Eclipse and Mozilla open-source projects. Compared to baseline models such as Naïve Bayes, Random Forest, Decision Tree, SVM, CNN, LSTM, and CNN-LSTM, the proposed model achieved superior performance. It recorded an accuracy of 93.00% for Eclipse and 84.11% for Mozilla, representing improvements of 31.11% and 40.39%, respectively, over the baseline models. Statistical verification confirmed that these performance gains were significant. This study distinguishes itself by integrating priority-based feature selection with a hybrid LSTM–Attention architecture, which enhances prediction accuracy and robustness compared to existing methods. The results demonstrate the potential of this approach to streamline bug prioritization, improve project management efficiency, and assist developers in resolving high-priority issues.
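The attention step described above — scoring each LSTM hidden state, normalizing the scores with a softmax, and taking the weighted sum as the context for the final prediction — can be sketched in plain Python. The hidden states, scoring vector, and dimensions below are invented for illustration and do not reproduce the paper's trained architecture.

```python
import math

def softmax(xs):
    # numerically stable softmax: subtract the max before exponentiating
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(hidden_states, score_vec):
    """Dot-product attention over per-timestep hidden states.

    Returns (context, weights): the context vector is a convex combination
    of the hidden states, weighted by softmax-normalized relevance scores.
    """
    scores = [sum(h * w for h, w in zip(state, score_vec)) for state in hidden_states]
    weights = softmax(scores)
    dim = len(hidden_states[0])
    context = [sum(w * state[i] for w, state in zip(weights, hidden_states))
               for i in range(dim)]
    return context, weights

# 3 timesteps of 2-dim hidden states (invented values)
states = [[0.1, 0.3], [0.7, 0.2], [0.4, 0.9]]
context, weights = attention(states, [1.0, 0.5])
```

In the full model the scoring vector is learned, so training pushes the weights toward the report tokens that are most informative for priority.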
(This article belongs to the Section Computing and Artificial Intelligence)

22 pages, 4283 KiB  
Article
GIS-Driven Methods for Scouting Sources of Waste Heat for Fifth-Generation District Heating and Cooling (5GDHC) Systems: Railway/Highway Tunnels
by Stanislav Chicherin
Processes 2025, 13(1), 165; https://doi.org/10.3390/pr13010165 - 9 Jan 2025
Abstract
This paper explores the innovative application of Geographic Information Systems (GISs) to identify and utilize waste heat sources from railway and highway tunnels for fifth-generation district heating and cooling (5GDHC) systems. Increasing the number of prosumers—entities that produce and consume energy—within 5GDHC networks enhances their efficiency and sustainability. While potential sources of waste heat vary widely, this study focuses on underground car/railway tunnels, which typically have a temperature range of 20 °C to 40 °C. Using GIS software, we comprehensively analyzed tunnel locations and their potential as heat sources in Belgium. This study incorporates data from various sources, including OpenStreetMap and the European Waste Heat Map, and applies a two-dimensional heat transfer model to estimate the heat recovery potential. The results indicate that railway tunnels, especially in the southern regions of Belgium, show significant promise for waste heat recovery, potentially contributing between 0.8 and 2.9 GWh annually. The integration of blockchain technology for peer-to-peer energy exchange within 5GDHC systems is also discussed, highlighting its potential to enhance energy management and billing. This research contributes to the growing body of knowledge on sustainable energy systems and presents a novel approach to leveraging existing district heating and cooling infrastructure.
(This article belongs to the Special Issue Novel Recovery Technologies from Wastewater and Waste)

21 pages, 11620 KiB  
Article
Performance Evaluation and Optimization of 3D Gaussian Splatting in Indoor Scene Generation and Rendering
by Xinjian Fang, Yingdan Zhang, Hao Tan, Chao Liu and Xu Yang
ISPRS Int. J. Geo-Inf. 2025, 14(1), 21; https://doi.org/10.3390/ijgi14010021 - 7 Jan 2025
Abstract
This study addresses the prevalent challenges of inefficiency and suboptimal quality in indoor 3D scene generation and rendering by proposing a parameter-tuning strategy for 3D Gaussian Splatting (3DGS). Through a systematic quantitative analysis of various performance indicators under differing resolution conditions, threshold settings for the average magnitude of spatial position gradients, and adjustments to the scaling learning rate, the optimal parameter configuration for the 3DGS model, specifically tailored for indoor modeling scenarios, is determined. Firstly, utilizing a self-collected dataset, a comprehensive comparison was conducted among COLLISION-MAPping (abbreviated as COLMAP, V3.7; an open-source software package based on Structure from Motion and Multi-View Stereo (SFM-MVS)), Context Capture (abbreviated as CC, V10.2; a software package utilizing oblique photography algorithms), Neural Radiance Fields (NeRF), and the currently renowned 3DGS algorithm. The key dimensions of focus included the number of images, rendering time, and overall rendering effectiveness. Subsequently, based on this comparison, rigorous qualitative and quantitative evaluations are further conducted on the overall performance and detail processing capabilities of the 3DGS algorithm. Finally, to meet the specific requirements of indoor scene modeling and rendering, targeted parameter tuning is performed on the algorithm. The results demonstrate significant performance improvements in the optimized 3DGS algorithm: the PSNR metric increases by 4.3%, and the SSIM metric improves by 0.2%. The experimental results show that the improved 3DGS algorithm exhibits superior expressive power in indoor scene rendering.
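The PSNR metric cited in these results follows a simple formula, PSNR = 10 * log10(MAX^2 / MSE), which can be computed with a short Python snippet. The pixel values below are invented flat lists standing in for 8-bit images; they are not data from the study.

```python
import math

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences."""
    mse = sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)
    if mse == 0:
        return float("inf")   # identical images: no noise
    return 10.0 * math.log10(max_val ** 2 / mse)

# Hypothetical rendered vs. reference pixels (8-bit range)
rendered = [52, 55, 61, 59]
reference = [50, 56, 60, 62]
value = psnr(rendered, reference)
```

Because PSNR is logarithmic in the mean squared error, the 4.3% improvement reported above corresponds to a substantially lower pixel-level error in the optimized renders.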

15 pages, 4219 KiB  
Article
Geometry Optimisation of a Wave Energy Converter
by Susana Costa, Jorge Ferreira and Nelson Martins
Energies 2025, 18(1), 207; https://doi.org/10.3390/en18010207 - 6 Jan 2025
Abstract
The geometry optimisation of a point-absorber wave energy converter, focusing on increasing the energy absorbed from heave forces, was performed. The proposed procedure starts by developing an initial geometry, which is then evaluated in terms of hydrodynamics and optimised through an optimisation algorithm that tunes the shape parameters influencing energy absorption, with the aim of obtaining the optimal geometry. A deployment site on the Portuguese coast was defined to obtain information on the predominant waves and to assess several sea states. NEMOH and WEC-Sim (both open-source software packages) were used to evaluate the interaction between the structure and the imposed wave conditions. The results extracted and analysed from these packages included the forces in all six degrees of freedom. Under extreme wave conditions, the highest increase in the relative capture width between the initial and final shapes was around 0.2, corresponding to an increase from 0.36 to 0.54, while under average wave conditions the increase only reached around 0.02, corresponding to an increase from 0.22 to 0.24.
(This article belongs to the Section A3: Wind, Wave and Tidal Energy)

25 pages, 4974 KiB  
Article
A Common Language of Software Evolution in Repositories (CLOSER)
by Jordan Garrity and David Cutting
Abstract
Version Control Systems (VCSs) are used by development teams to manage the collaborative evolution of source code, and there are several widely used industry-standard VCSs. In addition to the code files themselves, metadata about the changes made are also recorded by the VCS, and this is often used with analytical tools to provide insight into the software development, a process known as Mining Software Repositories (MSR). MSR tools are numerous but most often limited to one VCS format and, therefore, restricted in their scope of application, in addition to the initial effort required to implement parsers for verbose textual VCS output. To address this limitation, a domain-specific language (DSL), the Common Language of Software Evolution in Repositories (CLOSER), was defined that abstracts away from specific implementations while mapping isomorphically to the data model of all major VCS formats. Using CLOSER directly as a data model, or as an intermediate stage in a conversion analysis approach, makes all major repositories usable rather than being limited to a single format. The initial barrier to adoption for MSR approaches is also lowered, as CLOSER output is a concise, easily machine-readable format. CLOSER was implemented in tooling and tested against a number of common expected use cases, including direct use in MSR analysis, proving the fidelity of the model and implementation. CLOSER was also successfully used to convert raw output logs from one VCS format to another, offering the possibility that legacy analysis tools could be used on other technologies without any changes being required. In addition to the advantages of a generic model opening all major VCS formats for analysis, the CLOSER format was found to require less code and to parse faster than traditional VCS logging outputs.
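As a rough illustration of the kind of normalization a tool like CLOSER performs, the sketch below parses a simplified git-style log into a common record structure. The sample log text and the field names are our own invention for this example; they are not the CLOSER data model or its output format.

```python
# Parse a simplified git-style log into per-commit records, the sort of
# verbose textual VCS output the abstract says MSR tools must handle.

SAMPLE_GIT_LOG = """\
commit deadbeef
Author: Alice <alice@example.com>
Date: 2025-01-10

    Fix null pointer in parser

commit cafef00d
Author: Bob <bob@example.com>
Date: 2025-01-11

    Add CLI flag
"""

def parse_git_log(text):
    commits, current = [], None
    for line in text.splitlines():
        if line.startswith("commit "):
            current = {"id": line.split()[1], "author": None,
                       "date": None, "message": []}
            commits.append(current)
        elif line.startswith("Author: "):
            current["author"] = line[len("Author: "):]
        elif line.startswith("Date: "):
            current["date"] = line[len("Date: "):]
        elif line.startswith("    "):          # indented message body
            current["message"].append(line.strip())
    return commits

records = parse_git_log(SAMPLE_GIT_LOG)
```

Once the log is in a neutral record form like this, the same downstream analysis can run regardless of which VCS produced the raw text — the core idea behind a format-agnostic intermediate representation.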

22 pages, 2528 KiB  
Systematic Review
AI Chatbots and Cognitive Control: Enhancing Executive Functions Through Chatbot Interactions: A Systematic Review
by Pantelis Pergantis, Victoria Bamicha, Charalampos Skianis and Athanasios Drigas
Abstract
Background/Objectives: The evolution of digital technology broadens a person’s intellectual growth. Research points out that implementing innovative applications of the digital world improves human social, cognitive, and metacognitive behavior. Artificial intelligence chatbots are yet another innovative human-made construct: forms of software that simulate human conversation, understand and process user input, and provide personalized responses. Executive function includes a set of higher mental processes necessary for formulating, planning, and achieving a goal. The present study aims to investigate executive function reinforcement through artificial intelligence chatbots, outlining potentials, limitations, and future research suggestions. Specifically, the study examined three research questions: the use of conversational chatbots in executive function training, their impact on executive-cognitive skills, and the duration of any improvements. Methods: The assessment of the existing literature was implemented using the systematic review method, according to the PRISMA 2020 principles. The avalanche search method was employed to search the following databases: Scopus, Web of Science, PubMed, and, complementarily, Google Scholar. This systematic review included studies from 2021 to the present using experimental, observational, or mixed methods. It included studies using AI-based chatbots or conversational agents to support executive functions and related outcomes, such as anxiety, stress, depression, memory, attention, cognitive load, and behavioral changes. It covered general populations as well as those with specific neurological conditions, with all studies peer-reviewed, written in English, and available in full text. The study excluded studies published before 2021, literature reviews, systematic reviews, non-AI-based chatbots or conversational agents, studies not targeting the range of executive skills and abilities, studies not written in English, and studies without open access. The criteria aligned with the study objectives, ensuring a focus on AI chatbots and the impact of conversational agents on executive function. The initial collection totaled n = 115 articles; however, the eligibility requirements led to the final selection of n = 10 studies. Results: The findings of the studies suggested positive effects of using AI chatbots to enhance and improve executive skills, although several limitations were identified, making it still difficult to generalize and reproduce their effects. Conclusions: AI chatbots are an innovative artificial intelligence tool that can function as a digital assistant for learning and expanding executive skills, contributing to the cognitive, metacognitive, and social development of the individual. However, their use in executive skills training is at a primary stage. The findings highlighted the need for a unified framework of reference and for future studies with better designs, diverse populations, larger participant samples, and longitudinal designs that observe the long-term effects of their use.
(This article belongs to the Special Issue Effects of Cognitive Training on Executive Function and Cognition)

15 pages, 2807 KiB  
Article
Automatic Characterization of Prostate Suspect Lesions on T2-Weighted Image Acquisitions Using Texture Features and Machine-Learning Methods: A Pilot Study
by Teodora Telecan, Cosmin Caraiani, Bianca Boca, Roxana Sipos-Lascu, Laura Diosan, Zoltan Balint, Raluca Maria Hendea, Iulia Andras, Nicolae Crisan and Monica Lupsor-Platon
Abstract
Background: Prostate cancer (PCa) is the most frequent neoplasia in the male population. According to the International Society of Urological Pathology (ISUP), PCa can be divided into two major groups based on prognosis and treatment options. Multiparametric magnetic resonance imaging (mpMRI) holds a central role in PCa assessment; however, it does not have a one-to-one correspondence with the histopathological grading of tumors. Recently, artificial intelligence (AI)-based algorithms and textural analysis, a subdivision of radiomics, have shown potential in bridging this gap. Objectives: We aimed to develop a machine-learning algorithm that predicts the ISUP grade of manually contoured prostate nodules on T2-weighted images and classifies them into clinically significant and indolent ones. Materials and Methods: We included 55 patients with 76 lesions. All patients were examined on the same 1.5 Tesla mpMRI scanner. Each nodule was manually segmented using the open-source 3D Slicer platform, and textural features were extracted using the PyRadiomics library (version 3.0.1). Classification was performed with machine-learning classifiers, and performance was assessed using precision, recall, and F1 scores. Results: The median age of the study group was 64 years (IQR 61–68), and the mean PSA value was 11.14 ng/mL. A total of 85.52% of the nodules were graded PI-RADS 4 or higher. Overall, the algorithm classified indolent and clinically significant PCa with an accuracy of 87.2%. Further, when trained to differentiate each ISUP group, the accuracy was 80.3%. Conclusions: We developed an AI-based decision-support system that accurately differentiates between the two PCa prognostic groups using only T2 MRI acquisitions by employing radiomics with a robust machine-learning architecture.
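The precision, recall, and F1 metrics named in this evaluation are straightforward to compute for a binary clinically-significant vs. indolent split. The label vectors below are invented (1 = clinically significant, 0 = indolent) and are not the study's data.

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Binary classification metrics from paired true/predicted labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0   # of predicted positives, how many were right
    recall = tp / (tp + fn) if tp + fn else 0.0      # of true positives, how many were found
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical lesion labels: 1 = clinically significant, 0 = indolent
y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
p, r, f = precision_recall_f1(y_true, y_pred)
```

F1, the harmonic mean of precision and recall, is a sensible headline metric here because the two lesion classes are imbalanced (most nodules were PI-RADS 4 or higher).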
36 pages, 25347 KiB  
Article
Construction of a Real-Scene 3D Digital Campus Using a Multi-Source Data Fusion: A Case Study of Lanzhou Jiaotong University
by Rui Gao, Guanghui Yan, Yingzhi Wang, Tianfeng Yan, Ruiting Niu and Chunyang Tang
ISPRS Int. J. Geo-Inf. 2025, 14(1), 19; https://doi.org/10.3390/ijgi14010019 - 3 Jan 2025
Viewed by 725
Abstract
Real-scene 3D digital campuses are essential for improving the accuracy and effectiveness of spatial data representation, facilitating informed decision-making for university administrators, optimizing resource management, and enriching user engagement for students and faculty. However, current approaches to constructing these digital environments face several challenges. They often rely on costly commercial platforms, struggle with integrating heterogeneous datasets, and require complex workflows to achieve both high precision and comprehensive campus coverage. This paper addresses these issues by proposing a systematic multi-source data fusion approach that employs open-source technologies to generate a real-scene 3D digital campus. A case study of Lanzhou Jiaotong University is presented to demonstrate the feasibility of this approach. Firstly, oblique photography based on unmanned aerial vehicles (UAVs) is used to capture large-scale, high-resolution images of the campus area, which are then processed using open-source software to generate an initial 3D model. Afterward, a high-resolution model of the campus buildings is created by integrating the UAV data, while a 3D Digital Elevation Model (DEM) and OpenStreetMap (OSM) building data provide a 3D overview of the surrounding area, resulting in a comprehensive real-scene digital campus model. Finally, the model is visualized on the web using Cesium, enabling functionalities such as real-time data loading, perspective switching, and spatial data querying. Results indicate that the proposed approach effectively eliminates reliance on expensive proprietary systems while rapidly and accurately reconstructing a real-scene digital campus. This framework not only streamlines data harmonization but also offers an open-source, practical, and cost-effective solution for real-scene 3D digital campus construction, promoting further research and applications in digital twin cities, Virtual Reality (VR), and Geographic Information Systems (GIS). Full article
11 pages, 967 KiB  
Article
Visual Noise Mask for Human Point-Light Displays: A Coding-Free Approach
by Catarina Carvalho Senra, Adriana Conceição Soares Sampaio and Olivia Morgan Lapenta
Viewed by 359
Abstract
Human point-light displays consist of luminous dots representing human articulations, thus depicting actions without pictorial information. These stimuli are widely used in action recognition experiments. Because humans excel at decoding human motion, point-light displays (PLDs) are often masked with additional moving dots (noise masks), thereby challenging stimulus recognition. These noise masks are typically generated within proprietary programming software, entail file format restrictions, and demand extensive programming skills. To address these limitations, we present the first user-friendly step-by-step guide to developing visual noise masks for PLDs using free, open-source software that offers compatibility with various file formats, features a graphical interface, and facilitates the manipulation of both 2D and 3D videos. Further, to validate our approach, we tested two generated masks in a pilot experiment with 12 subjects and demonstrated that they effectively impaired recognition of the human agent and, therefore, action visibility. In sum, the main advantages of the presented methodology are its cost-effectiveness and ease of use, making it appealing to novices in programming. This advancement holds the potential to stimulate young researchers' use of PLDs, fostering further exploration and understanding of human motion perception. Full article
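The noise-mask idea can also be expressed computationally. A common construction (not necessarily the exact one this coding-free guide produces) is a scrambled-motion mask: each noise dot copies the frame-to-frame displacement of one real joint but starts from a random position, preserving local motion cues while destroying the global body configuration. The sketch below uses synthetic joint trajectories as a placeholder for a real PLD.

```python
# Hypothetical sketch of a scrambled-motion noise mask for point-light
# displays: each noise dot reuses one joint's per-frame displacements
# from a random starting position. Trajectories are synthetic.
import numpy as np

rng = np.random.default_rng(42)

# Synthetic PLD: 13 joints over 60 frames in 2D -> (frames, joints, xy)
frames, joints = 60, 13
pld = np.cumsum(rng.normal(scale=0.5, size=(frames, joints, 2)), axis=0)

def scrambled_mask(pld, n_dots, extent=100.0, rng=rng):
    """Build noise dots that copy joint motion vectors from random starts."""
    frames, joints, _ = pld.shape
    motion = np.diff(pld, axis=0)               # per-frame displacements
    src = rng.integers(0, joints, size=n_dots)  # joint copied by each dot
    mask = np.empty((frames, n_dots, 2))
    mask[0] = rng.uniform(-extent, extent, size=(n_dots, 2))
    for t in range(1, frames):
        mask[t] = mask[t - 1] + motion[t - 1, src]
    return mask

mask = scrambled_mask(pld, n_dots=26)  # twice as many noise dots as joints
print(mask.shape)  # → (60, 26, 2)
```

Overlaying `mask` on `pld` yields a display in which every dot moves like a human joint, so the agent cannot be segregated from the noise by motion alone.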