Journal Description
Information is a scientific, peer-reviewed, open access journal of information science and technology, data, knowledge, and communication, published monthly online by MDPI. The International Society for Information Studies (IS4SI) is affiliated with Information, and its members receive discounts on the article processing charges.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Ei Compendex, dblp, and other databases.
- Journal Rank: CiteScore - Q2 (Information Systems)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 14.9 days after submission; acceptance to publication takes 2.9 days (median values for papers published in this journal in the first half of 2024).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 2.4 (2023); 5-Year Impact Factor: 2.6 (2023)
Latest Articles
From Data to Diagnosis: Machine Learning Revolutionizes Epidemiological Predictions
Information 2024, 15(11), 719; https://doi.org/10.3390/info15110719 - 8 Nov 2024
Abstract
Outbreaks of epidemiological diseases have a major impact on humanity as well as on the world’s economy. The consequences of such infectious diseases threaten the survival of mankind. Governments must counter the negative influence of these epidemiological diseases and provide society with medical resources and economic support. In recent times, COVID-19 has been one of the epidemiological diseases that caused lethal effects and a sharp slump in the economy. Predicting outbreaks, whether recurrent or sudden, is therefore essential. The rise in the application of prediction models in recent years has been remarkable. This article surveys epidemiological prediction models and their usage from 2018 onwards, and emphasizes and summarizes the popularity of the various prediction approaches.
(This article belongs to the Special Issue Health Data Information Retrieval)
Open Access Article
Lightweight Reference-Based Video Super-Resolution Using Deformable Convolution
by Tomo Miyazaki, Zirui Guo and Shinichiro Omachi
Information 2024, 15(11), 718; https://doi.org/10.3390/info15110718 - 8 Nov 2024
Abstract
Super-resolution is a technique for generating a high-resolution image or video from a low-resolution counterpart by predicting natural and realistic texture information. It has various applications such as medical image analysis, surveillance, remote sensing, etc. However, traditional single-image super-resolution methods can lead to a blurry visual effect. Reference-based super-resolution methods have been proposed to recover detailed information accurately. In reference-based methods, a high-resolution image is also used as a reference in addition to the low-resolution input image. Reference-based methods aim at transferring high-resolution textures from the reference image to produce visually pleasing results. However, it requires texture alignment between low-resolution and reference images, which generally requires a lot of time and memory. This paper proposes a lightweight reference-based video super-resolution method using deformable convolution. The proposed method makes the reference-based super-resolution a technology that can be easily used even in environments with limited computational resources. To verify the effectiveness of the proposed method, we conducted experiments to compare the proposed method with baseline methods in two aspects: runtime and memory usage, in addition to accuracy. The experimental results showed that the proposed method restored a high-quality super-resolved image from a very low-resolution level in 0.0138 s using two NVIDIA RTX 2080 GPUs, much faster than the representative method.
(This article belongs to the Special Issue Deep Learning for Image, Video and Signal Processing)
Open Access Article
Variational Color Shift and Auto-Encoder Based on Large Separable Kernel Attention for Enhanced Text CAPTCHA Vulnerability Assessment
by Xing Wan, Juliana Johari and Fazlina Ahmat Ruslan
Information 2024, 15(11), 717; https://doi.org/10.3390/info15110717 - 7 Nov 2024
Abstract
Text CAPTCHAs are crucial security measures deployed on global websites to deter unauthorized intrusions. The presence of anti-attack features incorporated into text CAPTCHAs limits the effectiveness of evaluating them, despite CAPTCHA recognition being an effective method for assessing their security. This study introduces a novel color augmentation technique called Variational Color Shift (VCS) to boost the recognition accuracy of different networks. VCS generates a color shift of every input image and then resamples the image within that range to generate a new image, thus expanding the number of samples of the original dataset to improve training effectiveness. In contrast to Random Color Shift (RCS), which treats the color offsets as hyperparameters, VCS estimates color shifts by reparametrizing the points sampled from the uniform distribution using predicted offsets according to every image, which makes the color shifts learnable. To better balance the computation and performance, we also propose two variants of VCS: Sim-VCS and Dilated-VCS. In addition, to solve the overfitting problem caused by disturbances in text CAPTCHAs, we propose an Auto-Encoder (AE) based on Large Separable Kernel Attention (AE-LSKA) to replace the convolutional module with large kernels in the text CAPTCHA recognizer. This new module employs an AE to compress the interference while expanding the receptive field using Large Separable Kernel Attention (LSKA), reducing the impact of local interference on the model training and improving the overall perception of characters. The experimental results show that the recognition accuracy of the model after integrating the AE-LSKA module is improved by at least 15 percentage points on both M-CAPTCHA and P-CAPTCHA datasets. In addition, experimental results demonstrate that color augmentation using VCS is more effective in enhancing recognition, which has higher accuracy compared to RCS and PCA Color Shift (PCA-CS).
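The abstract contrasts VCS with Random Color Shift (RCS), whose per-channel offsets are fixed hyperparameters rather than learned per image. As an illustrative baseline only (the function name, offset range, and toy patch below are assumptions, not the authors' code), RCS can be sketched in a few lines:

```python
import numpy as np

def random_color_shift(image, max_shift=20, rng=None):
    """Baseline Random Color Shift (RCS): add one uniform offset per RGB
    channel, sampled from [-max_shift, max_shift], then clip to [0, 255].

    `image` is an H x W x 3 uint8 array; `max_shift` is the fixed
    hyperparameter that VCS replaces with per-image, learnable offsets.
    """
    rng = np.random.default_rng() if rng is None else rng
    offsets = rng.uniform(-max_shift, max_shift, size=3)  # one shift per channel
    shifted = image.astype(np.float32) + offsets          # broadcast over H x W
    return np.clip(shifted, 0, 255).astype(np.uint8)

# Example: augment a synthetic 8x8 RGB "CAPTCHA" patch.
patch = np.full((8, 8, 3), 128, dtype=np.uint8)
augmented = random_color_shift(patch, max_shift=20, rng=np.random.default_rng(0))
```

Each call produces a differently tinted copy of the input, which is how such augmentation expands the training set.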
(This article belongs to the Special Issue Computer Vision for Security Applications)
Open Access Article
Enabling Parallel Performance and Portability of Solid Mechanics Simulations Across CPU and GPU Architectures
by Nathaniel Morgan, Caleb Yenusah, Adrian Diaz, Daniel Dunning, Jacob Moore, Erin Heilman, Evan Lieberman, Steven Walton, Sarah Brown, Daniel Holladay, Russell Marki, Robert Robey and Marko Knezevic
Information 2024, 15(11), 716; https://doi.org/10.3390/info15110716 - 7 Nov 2024
Abstract
Efficiently simulating solid mechanics is vital across various engineering applications. As constitutive models grow more complex and simulations scale up in size, harnessing the capabilities of modern computer architectures has become essential for achieving timely results. This paper presents advancements in running parallel simulations of solid mechanics on multi-core CPUs and GPUs using a single-code implementation. This portability is made possible by the C++ matrix and array (MATAR) library, which interfaces with the C++ Kokkos library, enabling the selection of fine-grained parallelism backends (e.g., CUDA, HIP, OpenMP, pthreads, etc.) at compile time. MATAR simplifies the transition from Fortran to C++ and Kokkos, making it easier to modernize legacy solid mechanics codes. We applied this approach to modernize a suite of constitutive models and to demonstrate substantial performance improvements across different computer architectures. This paper includes comparative performance studies using multi-core CPUs along with AMD and NVIDIA GPUs. Results are presented using a hypoelastic–plastic model, a crystal plasticity model, and the viscoplastic self-consistent generalized material model (VPSC-GMM). The results underscore the potential of using the MATAR library and modern computer architectures to accelerate solid mechanics simulations.
(This article belongs to the Special Issue Advances in High Performance Computing and Scalable Software)
Open Access Article
Two-Stage Combined Model for Short-Term Electricity Forecasting in Ports
by Wentao Song, Xiaohua Cao, Hanrui Jiang, Zejun Li and Ruobin Gao
Information 2024, 15(11), 715; https://doi.org/10.3390/info15110715 - 7 Nov 2024
Abstract
With an increasing emphasis on energy conservation, emission reduction, and power consumption management, port enterprises are focusing on enhancing their electricity load forecasting capabilities. Accurate electricity load forecasting is crucial for understanding power usage and optimizing energy allocation. This study introduces a novel approach that transcends the limitations of single prediction models by employing a Binary Fusion Weight Determination Method (BFWDM) to optimize and integrate three distinct prediction models: Temporal Pattern Attention Long Short-Term Memory (TPA-LSTM), Multi-Quantile Recurrent Neural Network (MQ-RNN), and Deep Factors. We propose a two-phase process for constructing an optimal combined forecasting model for port power load prediction. In the initial phase, individual prediction models generate preliminary outcomes. In the subsequent phase, these preliminary predictions are used to construct a combination forecasting model based on the BFWDM. The efficacy of the proposed model is validated using data from two actual ports, demonstrating high prediction accuracy with Mean Absolute Percentage Errors (MAPE) of only 6.23% and 7.94%. This approach not only enhances prediction accuracy but also improves the adaptability and stability of the model compared to other existing models.
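The abstract does not spell out the BFWDM weighting itself, so the following is a generic inverse-error fusion sketch, not the paper's method; the helper names and toy load series are invented for illustration:

```python
import numpy as np

def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

def combine_forecasts(actual, forecasts):
    """Fuse individual model forecasts with inverse-MAPE weights.

    This is a generic error-based weighting, not the paper's BFWDM:
    models with lower historical error receive proportionally larger
    weights, and the weights sum to one.
    """
    errors = np.array([mape(actual, f) for f in forecasts])
    weights = (1.0 / errors) / np.sum(1.0 / errors)
    combined = np.sum([w * np.asarray(f, float)
                       for w, f in zip(weights, forecasts)], axis=0)
    return combined, weights

# Example: two toy load forecasts for a five-hour window.
load    = [100.0, 110.0, 105.0, 120.0, 115.0]
model_a = [102.0, 108.0, 107.0, 118.0, 117.0]   # small errors
model_b = [ 90.0, 120.0,  95.0, 130.0, 105.0]   # larger errors
fused, w = combine_forecasts(load, [model_a, model_b])
```

The fused series leans toward the historically more accurate model, which is the basic intuition behind any two-stage combination scheme.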
(This article belongs to the Section Information Applications)
Open Access Article
Maternal Nutritional Factors Enhance Birthweight Prediction: A Super Learner Ensemble Approach
by Muhammad Mursil, Hatem A. Rashwan, Pere Cavallé-Busquets, Luis A. Santos-Calderón, Michelle M. Murphy and Domenec Puig
Information 2024, 15(11), 714; https://doi.org/10.3390/info15110714 - 6 Nov 2024
Abstract
Birthweight (BW) is a widely used indicator of neonatal health, with low birthweight (LBW) being linked to higher risks of morbidity and mortality. Timely and precise prediction of LBW is crucial for ensuring newborn health and well-being. Despite recent machine learning advancements in BW classification based on physiological traits in the mother and ultrasound outcomes, maternal status in essential micronutrients for fetal development is yet to be fully exploited for BW prediction. This study aims to evaluate the impact of maternal nutritional factors, specifically mid-pregnancy plasma concentrations of vitamin B12, folate, and anemia, on BW prediction. This study analyzed data from 729 pregnant women in Tarragona, Spain, for early BW prediction and assessed each factor’s impact and contribution using partial dependency plots and feature importance. Using a super learner ensemble method with tenfold cross-validation, the model achieved a prediction accuracy of 96.19% and an AUC-ROC of 0.96, outperforming single-model approaches. Vitamin B12 and folate status were identified as significant predictors, underscoring their importance in reducing LBW risk. The findings highlight the critical role of maternal nutritional factors in BW prediction and suggest that monitoring vitamin B12 and folate levels during pregnancy could enhance prenatal care and mitigate neonatal complications associated with LBW.
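A super learner is, in essence, cross-validated stacking: base learners' out-of-fold predictions train a meta-learner. A minimal sketch with scikit-learn's `StackingClassifier` follows; the synthetic features and chosen estimators are stand-ins, not the study's models or the Tarragona cohort:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for maternal features (e.g., B12, folate, anemia
# markers); the real study used 729 pregnancies, not this toy data.
X, y = make_classification(n_samples=400, n_features=8, n_informative=5,
                           random_state=0)

# Base learners' out-of-fold predictions feed a logistic-regression
# meta-learner; cv=10 mirrors the tenfold cross-validation in the abstract.
super_learner = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
                ("dt", DecisionTreeClassifier(random_state=0))],
    final_estimator=LogisticRegression(),
    cv=10,
)
scores = cross_val_score(super_learner, X, y, cv=5)
```

The ensemble typically matches or beats its best base learner, which is the property the study exploits for LBW classification.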
(This article belongs to the Special Issue Real-World Applications of Machine Learning Techniques)
Open Access Editorial
Best IDEAS: Special Issue of the International Database Engineered Applications Symposium
by Peter Z. Revesz
Information 2024, 15(11), 713; https://doi.org/10.3390/info15110713 - 6 Nov 2024
Abstract
Database engineered applications cover a broad range of topics including various design and maintenance methods, as well as data analytics and data mining algorithms and learning strategies for enterprise, distributed, or federated data stores [...]
(This article belongs to the Special Issue International Database Engineered Applications)
Open Access Article
Exploring the Features and Trends of Industrial Product E-Commerce in China Using Text-Mining Approaches
by Zhaoyang Sun, Qi Zong, Yuxin Mao and Gongxing Wu
Information 2024, 15(11), 712; https://doi.org/10.3390/info15110712 - 6 Nov 2024
Abstract
Industrial product e-commerce refers to the specific application of the e-commerce concept in industrial product transactions. It enables industrial enterprises to conduct transactions via Internet platforms and reduce circulation and operating costs. Industrial literature, such as policies, reports, and standards related to industrial product e-commerce, contains much crucial information. Through a systematic analysis of this information, we can explore and comprehend the development characteristics and trends of industrial product e-commerce. To this end, 18 policy documents, 10 industrial reports, and five standards are analyzed by employing text-mining methods. Firstly, natural language processing (NLP) technology is utilized to pre-process the text data related to industrial product e-commerce. Then, word frequency statistics and TF-IDF keyword extraction are performed, and the word frequency statistics are visually represented. Subsequently, the feature set is obtained by combining these processes with manual screening. The original text corpus is used as the training set by employing the skip-gram model in Word2Vec, and the feature words are transformed into word vectors in the multi-dimensional space. The K-means algorithm is used to cluster the feature words into groups. The latent Dirichlet allocation (LDA) method is then utilized to further group and discover the features. The text-mining results provide evidence for the development characteristics and trends of industrial product e-commerce in China.
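The early steps of the pipeline described above, keyword weighting followed by clustering, can be sketched with scikit-learn. The toy snippets below are invented stand-ins for the mined documents, and TF-IDF vectors replace the paper's Word2Vec skip-gram embeddings to keep the sketch dependency-light:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented stand-ins for policy/report/standard snippets; the study mined
# 18 policies, 10 reports, and 5 standards, which are not reproduced here.
docs = [
    "e-commerce platform reduces circulation costs for industrial products",
    "policy supports industrial internet platforms and digital transactions",
    "standard defines data exchange for industrial product transactions",
    "report reviews e-commerce operating costs and platform adoption",
]

# TF-IDF keyword weighting, as in the pipeline's extraction step.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(docs)

# K-means then groups the weighted term vectors (the paper clusters
# Word2Vec embeddings of the screened feature words instead).
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(tfidf)
labels = km.labels_
```

The resulting cluster labels would then be refined with LDA topic modeling in the paper's final grouping step.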
Open Access Article
Discrete Fourier Transform in Unmasking Deepfake Images: A Comparative Study of StyleGAN Creations
by Vito Nicola Convertini, Donato Impedovo, Ugo Lopez, Giuseppe Pirlo and Gioacchino Sterlicchio
Information 2024, 15(11), 711; https://doi.org/10.3390/info15110711 - 6 Nov 2024
Abstract
This study proposes a novel forgery detection method based on the analysis of frequency components of images using the Discrete Fourier Transform (DFT). In recent years, face manipulation technologies, particularly Generative Adversarial Networks (GANs), have advanced to such an extent that their misuse, such as creating deepfakes that human observers cannot distinguish from genuine images, has become a significant societal concern. We reviewed two GAN architectures, StyleGAN and StyleGAN2, generating synthetic faces that were compared with real faces from the FFHQ and CelebA-HQ datasets. The key results demonstrate classification accuracies above 99%, with F1 scores of 99.94% for Support Vector Machines and 97.21% for Random Forest classifiers. These findings underline that frequency analysis offers a superior approach to deepfake detection compared with traditional spatial detection methods. It provides insight into subtle manipulation cues in digital images and offers a scalable way to enhance security protocols amid rising digital impersonation threats.
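A minimal sketch of the frequency-analysis idea: compute log-magnitude DFT features that could then feed an SVM or random forest. The feature size, synthetic textures, and function name are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

def dft_magnitude_features(image, keep=8):
    """Log-magnitude spectrum features via the 2-D Discrete Fourier Transform.

    Shifts the zero-frequency component to the centre and keeps a small
    central `keep` x `keep` block as a compact feature vector; GAN-generated
    images often leave periodic artefacts visible in this spectrum.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    log_mag = np.log1p(np.abs(spectrum))
    c = image.shape[0] // 2
    h = keep // 2
    return log_mag[c - h:c + h, c - h:c + h].ravel()

# Illustrative only: a smooth gradient vs. a high-frequency checkerboard
# stand in for "real" and "synthetic" textures (not FFHQ/StyleGAN data).
grad = np.outer(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
checker = np.indices((32, 32)).sum(axis=0) % 2
f_grad = dft_magnitude_features(grad)
f_checker = dft_magnitude_features(checker)
```

In a full detector, vectors like these, extracted from many labelled images, would train the SVM or random forest classifiers the abstract evaluates.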
(This article belongs to the Special Issue Computer Vision, Pattern Recognition and Machine Learning in Italy)
Open Access Review
Cybersecurity at Sea: A Literature Review of Cyber-Attack Impacts and Defenses in Maritime Supply Chains
by Maria Valentina Clavijo Mesa, Carmen Elena Patino-Rodriguez and Fernando Jesus Guevara Carazas
Information 2024, 15(11), 710; https://doi.org/10.3390/info15110710 - 6 Nov 2024
Abstract
The maritime industry is constantly evolving and posing new challenges, especially with increasing digitalization, which has raised concerns about cyber-attacks on maritime supply chain agents. Although scholars have proposed various methods and classification models to counter these cyber threats, a comprehensive cyber-attack taxonomy for maritime supply chain actors based on a systematic literature review is still lacking. This review aims to provide a clear picture of common cyber-attacks and develop a taxonomy for their categorization. In addition, it outlines best practices derived from academic research in maritime cybersecurity using PRISMA principles for a systematic literature review, which identified 110 relevant journal papers. This study highlights that distributed denial of service (DDoS) attacks and malware are top concerns for all maritime supply chain stakeholders. In particular, shipping companies are urged to prioritize defenses against hijacking, spoofing, and jamming. The report identifies 18 practices to combat cyber-attacks, categorized into information security management solutions, information security policies, and cybersecurity awareness and training. Finally, this paper explores how emerging technologies can address cyber-attacks in the maritime supply chain network (MSCN). While Industry 4.0 technologies are highlighted as significant trends in the literature, this study aims to equip MSCN stakeholders with the knowledge to effectively leverage a broader range of emerging technologies. In doing so, it provides forward-looking solutions to prevent and mitigate cyber-attacks, emphasizing that Industry 4.0 is part of a larger landscape of technological innovation.
Open Access Article
Pressure and Temperature Prediction of Oil Pipeline Networks Based on a Mechanism-Data Hybrid Driven Method
by Faming Gong, Xingfang Zhao, Chengze Du, Kaiwen Zheng, Zhuang Shi and Hao Wang
Information 2024, 15(11), 709; https://doi.org/10.3390/info15110709 - 5 Nov 2024
Abstract
To ensure the operational safety of oil transportation stations, it is crucial to predict the impact of pressure and temperature before crude oil enters the pipeline network. Accurate predictions enable the assessment of the pipeline’s load-bearing capacity and the prevention of potential safety incidents. Most existing studies primarily focus on describing and modeling the mechanisms of the oil flow process. However, monitoring data can be skewed by factors such as instrument aging and pipeline friction, leading to inaccurate predictions when relying solely on mechanistic or data-driven approaches. To address these limitations, this paper proposes a Temporal-Spatial Three-stream Temporal Convolutional Network (TS-TTCN) model that integrates mechanistic knowledge with data-driven methods. Building upon Temporal Convolutional Networks (TCN), the TS-TTCN model synthesizes mechanistic insights into the oil transport process to establish a hybrid driving mechanism. In the temporal dimension, it incorporates real-time operating parameters and applies temporal convolution techniques to capture the time-series characteristics of the oil transportation pipeline network. In the spatial dimension, it constructs a directed topological map based on the pipeline network’s node structure to characterize spatial features. Data analysis and experimental results show that the Three-stream Temporal Convolutional Network (TTCN) model, which uses a Tanh activation function, achieves an error rate below 5%. By analyzing and validating real-time data from the Dongying oil transportation station, the proposed hybrid model proves to be more stable, reliable, and accurate under varying operating conditions.
(This article belongs to the Special Issue Advances in Machine Learning and Intelligent Information Systems)
Open Access Article
A Hybrid Semantic Representation Method Based on Fusion Conceptual Knowledge and Weighted Word Embeddings for English Texts
by Zan Qiu, Guimin Huang, Xingguo Qin, Yabing Wang, Jiahao Wang and Ya Zhou
Information 2024, 15(11), 708; https://doi.org/10.3390/info15110708 - 5 Nov 2024
Abstract
The accuracy of traditional topic models may be compromised due to the sparsity of co-occurring vocabulary in the corpus, whereas conventional word embedding models tend to excessively prioritize contextual semantic information and inadequately capture domain-specific features in the text. This paper proposes a hybrid semantic representation method that combines a topic model that integrates conceptual knowledge with a weighted word embedding model. Specifically, we construct a topic model incorporating the Probase concept knowledge base to perform topic clustering and obtain topic semantic representation. Additionally, we design a weighted word embedding model to enhance the contextual semantic information representation of the text. The feature-based information fusion model is employed to integrate the two textual representations and generate a hybrid semantic representation. The hybrid semantic representation model proposed in this study was evaluated based on various English composition test sets. The findings demonstrate that the model presented in this paper exhibits superior accuracy and practical value compared to existing text representation methods.
Open Access Article
Is the Taiwan Stock Market (Swarm) Intelligent?
by Ren-Raw Chen
Information 2024, 15(11), 707; https://doi.org/10.3390/info15110707 - 5 Nov 2024
Abstract
It is widely believed that most trading activities tend to herd. Herding is an important topic in finance: it implies a violation of market efficiency and hence suggests possibly predictable trading profits. However, it is hard to test such a hypothesis using aggregated data (as in the literature). In this paper, we obtain a proprietary data set that contains detailed trading information, and as a result, for the first time it allows us to validate this hypothesis. The data set contains all trades transacted in 2019 by all the brokers/dealers across all locations in Taiwan of all the equities (stocks, warrants, and ETFs). Given such data, we use swarm intelligence to identify such herding behavior. In particular, we use two versions of swarm intelligence, Boids and PSO (particle swarm optimization), to study the herding behavior. Our results indicate weak swarming behavior among brokers/dealers.
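Of the two swarm models mentioned, PSO is the simpler to sketch. The toy example below minimizes a sphere function rather than analyzing trading data, and all coefficients are conventional defaults, not the paper's settings:

```python
import numpy as np

def pso_minimize(f, dim=2, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimization (PSO). Standard velocity update:
    inertia + cognitive pull toward each particle's best position + social
    pull toward the swarm's best position.
    """
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([f(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, social coefficients
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([f(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Example: the swarm converges toward the sphere function's minimum at the origin.
best, best_val = pso_minimize(lambda x: np.sum(x ** 2))
```

In the paper's setting, the degree to which broker/dealer trades cohere like such a swarm is what quantifies herding.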
(This article belongs to the Special Issue Artificial Intelligence (AI) for Economics and Business Management)
Open Access Article
Uncovering Key Factors That Drive the Impressions of Online Emerging Technology Narratives
by Lowri Williams, Eirini Anthi and Pete Burnap
Information 2024, 15(11), 706; https://doi.org/10.3390/info15110706 - 5 Nov 2024
Abstract
Social media platforms play a significant role in facilitating business decision making, especially in the context of emerging technologies. Such platforms offer a rich source of data from a global audience, which can provide organisations with insights into market trends, consumer behaviour, and attitudes towards specific technologies, as well as monitoring competitor activity. In the context of social media, such insights are conceptualised as immediate and real-time behavioural responses measured by likes, comments, and shares. To monitor such metrics, social media platforms have introduced tools that allow users to analyse and track the performance of their posts and understand their audience. However, the existing tools often overlook the impact of contextual features such as sentiment, URL inclusion, and specific word use. This paper presents a data-driven framework to identify and quantify the influence of such features on the visibility and impact of technology-related tweets. The quantitative analysis from statistical modelling reveals that certain content-based features, like the number of words and pronouns used, positively correlate with the impressions of tweets, with increases of up to 2.8%. Conversely, features such as the excessive use of hashtags, verbs, and complex sentences were found to decrease impressions significantly, with a notable reduction of 8.6% associated with tweets containing numerous trailing characters. Moreover, the study shows that tweets expressing negative sentiments tend to be more impressionable, likely due to a negativity bias that elicits stronger emotional responses and drives higher engagement and virality. Additionally, the sentiment associated with specific technologies also played a crucial role; positive sentiments linked to beneficial technologies like data science or machine learning significantly boosted impressions, while similar sentiments towards negatively viewed technologies like cyber threats reduced them. The inclusion of URLs in tweets also had a mixed impact on impressions, enhancing engagement for general technology topics but reducing it for sensitive subjects due to potential concerns over link safety. These findings underscore the importance of a strategic approach to social media content creation, emphasising the need for businesses to align their communication strategies, such as responding to shifts in user behaviours, new demands, and emerging uncertainties, with dynamic user engagement patterns.
(This article belongs to the Section Information Processes)
Open Access Article
Exploring Sentiment Analysis for the Indonesian Presidential Election Through Online Reviews Using Multi-Label Classification with a Deep Learning Algorithm
by Ahmad Nahid Ma’aly, Dita Pramesti, Ariadani Dwi Fathurahman and Hanif Fakhrurroja
Information 2024, 15(11), 705; https://doi.org/10.3390/info15110705 - 5 Nov 2024
Abstract
Presidential elections are important political events that often trigger intense debate. With more than 139 million users, YouTube serves as a significant platform for understanding public opinion through sentiment analysis. This study aimed to implement deep learning techniques for a multi-label sentiment analysis of comments on YouTube videos related to the 2024 Indonesian presidential election. Offering a fresh perspective compared to previous research that primarily employed traditional classification methods, this study classifies comments into eight emotional labels: anger, anticipation, disgust, joy, fear, sadness, surprise, and trust. By focusing on the emotional spectrum, this study provides a more nuanced understanding of public sentiment towards presidential candidates. The CRISP-DM method is applied, encompassing stages of business understanding, data understanding, data preparation, modeling, evaluation, and deployment, ensuring a systematic and comprehensive approach. This study employs a dataset comprising 32,000 comments, obtained via the YouTube Data API, from the KPU and Najwa Shihab channels. The analysis is specifically centered on comments related to presidential candidate debates. Three deep learning models, Convolutional Neural Network (CNN), Bidirectional Long Short-Term Memory (Bi-LSTM), and a hybrid model combining CNN and Bi-LSTM, are assessed using confusion matrix, Area Under the Curve (AUC), and Hamming loss metrics. The evaluation results demonstrate that the Bi-LSTM model achieved the highest accuracy with an AUC value of 0.91 and a Hamming loss of 0.08, indicating an excellent ability to classify sentiment with high precision and a low error rate. This innovative approach to multi-label sentiment analysis in the context of the 2024 Indonesian presidential election expands the insights into public sentiment towards candidates, offering valuable implications for political campaign strategies. Additionally, this research contributes to the fields of natural language processing and data mining by addressing the challenges associated with multi-label sentiment analysis.
Full article
(This article belongs to the Special Issue Machine Learning and Data Mining for User Classification)
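The Hamming loss reported in the abstract (0.08) is straightforward to compute by hand: it is the fraction of label slots predicted incorrectly, averaged over all samples. A minimal sketch in Python; the label vectors below are invented for illustration, not taken from the study's data:

```python
def hamming_loss(y_true, y_pred):
    """Fraction of label slots predicted incorrectly across all samples.

    y_true, y_pred: lists of equal-length binary label vectors
    (one slot per emotion, e.g. anger, anticipation, ..., trust).
    """
    total_slots = sum(len(row) for row in y_true)
    wrong = sum(
        t != p
        for t_row, p_row in zip(y_true, y_pred)
        for t, p in zip(t_row, p_row)
    )
    return wrong / total_slots

# Two comments, eight emotion labels each; 2 of 16 slots are wrong.
y_true = [[1, 0, 0, 1, 0, 0, 0, 1],
          [0, 1, 0, 0, 0, 1, 0, 0]]
y_pred = [[1, 0, 0, 0, 0, 0, 0, 1],   # one missed label
          [0, 1, 1, 0, 0, 1, 0, 0]]   # one spurious label
print(hamming_loss(y_true, y_pred))  # 2/16 = 0.125
```

A lower value is better: a Hamming loss of 0.08 means 8% of all emotion-label decisions were incorrect.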
Open Access Article
Unsupervised Decision Trees for Axis Unimodal Clustering
by Paraskevi Chasani and Aristidis Likas
Information 2024, 15(11), 704; https://doi.org/10.3390/info15110704 - 5 Nov 2024
Abstract
The use of decision trees for obtaining and representing clustering solutions is advantageous due to their interpretability. We propose a method called Decision Trees for Axis Unimodal Clustering (DTAUC), which constructs unsupervised binary decision trees for clustering by exploiting the concept of unimodality. Unimodality is a key property indicating the grouping behavior of data around a single density mode. Our approach is based on the notion of an axis unimodal cluster: a cluster in which every feature is unimodal, i.e., the set of values of each feature is unimodal as decided by a unimodality test. The proposed method follows the typical top-down splitting paradigm for building axis-aligned decision trees and aims to partition the initial dataset into axis unimodal clusters by applying thresholding on multimodal features. To determine the decision rule at each node, we propose a criterion that combines unimodality and separation. The method terminates automatically when all clusters are axis unimodal. Unlike typical decision tree methods, DTAUC requires no user-defined hyperparameters, such as maximum tree depth or the minimum number of points per leaf, apart from the significance level of the unimodality test. Comparative experimental results on various synthetic and real datasets indicate the effectiveness of our method.
Full article
(This article belongs to the Special Issue Advanced Methods for Multi-Source Information Management, Modeling, and Analysis)
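The top-down splitting loop that the DTAUC abstract describes can be sketched as follows. This is a toy illustration only: the paper relies on a statistical unimodality test and a criterion combining unimodality and separation, which are replaced here by crude histogram heuristics (`is_unimodal`, `valley_threshold`) so the sketch stays self-contained:

```python
def _histogram(values, bins):
    """Equal-width histogram counts over the value range."""
    lo, hi = min(values), max(values)
    counts = [0] * bins
    for v in values:
        i = min(int((v - lo) / (hi - lo) * bins), bins - 1)
        counts[i] += 1
    return counts

def is_unimodal(values):
    """Crude stand-in for a statistical unimodality test: the histogram
    counts may rise and then fall, but never rise again."""
    if min(values) == max(values):
        return True
    bins = max(2, min(8, len(values) // 2))
    counts = _histogram(values, bins)
    falling = False
    for prev, cur in zip(counts, counts[1:]):
        if cur > prev and falling:
            return False      # a second mode appears after a valley
        if cur < prev:
            falling = True
    return True

def valley_threshold(values):
    """Split point: centre of the emptiest interior histogram bin."""
    bins = max(2, min(8, len(values) // 2))
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins
    counts = _histogram(values, bins)
    valley = min(range(1, bins - 1), key=lambda i: counts[i])
    return lo + (valley + 0.5) * width

def split_axis_unimodal(points):
    """Top-down splitting in the spirit of DTAUC: recurse until every
    leaf is axis unimodal (each feature passes the unimodality check)."""
    for f in range(len(points[0])):
        col = [p[f] for p in points]
        if not is_unimodal(col):
            t = valley_threshold(col)
            left = [p for p in points if p[f] <= t]
            right = [p for p in points if p[f] > t]
            if left and right:
                return split_axis_unimodal(left) + split_axis_unimodal(right)
    return [points]   # every feature unimodal: an axis unimodal cluster

# Two well-separated 1-D groups should come back as two leaves.
data = [(x,) for x in [1.0, 1.1, 1.2, 1.3, 9.0, 9.1, 9.2, 9.3]]
clusters = split_axis_unimodal(data)
print(len(clusters))  # 2
```

Note how the recursion needs no depth or leaf-size hyperparameters: it stops on its own once every leaf is axis unimodal, mirroring the automatic termination the abstract highlights.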
Open Access Review
Exploring Perspectives of Blockchain Technology and Traditional Centralized Technology in Organ Donation Management: A Comprehensive Review
by Geet Bawa, Harmeet Singh, Sita Rani, Aman Kataria and Hong Min
Information 2024, 15(11), 703; https://doi.org/10.3390/info15110703 - 4 Nov 2024
Abstract
Background/Objectives: The healthcare sector is growing rapidly, aiming to promote health, provide treatment, and enhance well-being. This paper focuses on the organ donation and transplantation system, a vital aspect of healthcare. It offers a comprehensive review of challenges in global organ donation and transplantation, highlighting issues of fairness and transparency, and compares centralized architecture-based models with blockchain-based decentralized models. Methods: This work reviews 370 publications from 2016 to 2023 on organ donation management systems. Of these, 85 publications met the inclusion criteria: 67 journal articles, 2 doctoral theses, and 16 conference papers. About 50.6% of these publications focus on global challenges in the system, 12.9% examine centralized architecture-based models, and 36.5% explore blockchain-based decentralized models. Results: Concerns about organ trafficking, illicit trade, system distrust, and unethical allocation are highlighted, with a lack of transparency identified as the primary catalyst in organ donation and transplantation. Centralized architecture-based models use technologies such as Python, Java, SQL, and Android, but face data storage issues. In contrast, blockchain-based decentralized models, mostly built on Ethereum and a subset on Hyperledger Fabric, benefit from decentralized data storage, ensure transparency, and address these concerns efficiently. Conclusions: Blockchain technology-based models are the better option for organ donation management systems. Suggestions for future research directions in the field of organ donation management systems are also presented.
Full article
Open Access Article
Enhancing Real-Time Cursor Control with Motor Imagery and Deep Neural Networks for Brain–Computer Interfaces
by Srinath Akuthota, Ravi Chander Janapati, K. Raj Kumar, Vassilis C. Gerogiannis, Andreas Kanavos, Biswaranjan Acharya, Foteini Grivokostopoulou and Usha Desai
Information 2024, 15(11), 702; https://doi.org/10.3390/info15110702 - 4 Nov 2024
Abstract
This paper advances real-time cursor control for individuals with motor impairments through a novel brain–computer interface (BCI) system based solely on motor imagery. We introduce an enhanced deep neural network (DNN) classifier integrated with a Four-Class Iterative Filtering (FCIF) technique for efficient preprocessing of neural signals. The underlying approach, the Four-Class Filter Bank Common Spatial Pattern (FCFBCSP), utilizes a customized filter bank for robust feature extraction, thereby significantly improving signal quality and cursor control responsiveness. Extensive testing under varied conditions demonstrates that the system achieves an average classification accuracy of 89.1% and a response time of 663 milliseconds, illustrating high precision in feature discrimination. Evaluations using metrics such as Recall, Precision, and F1-Score confirm the system’s effectiveness and accuracy in practical applications, making it a valuable tool for enhancing accessibility for individuals with motor disabilities.
Full article
(This article belongs to the Special Issue Intelligent Information Processing for Sensors and IoT Communications)
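The Recall, Precision, and F1-Score metrics cited in the abstract are typically computed one-vs-rest per class in a multi-class setting. A minimal sketch; the four cursor-direction labels below are hypothetical, invented for illustration rather than taken from the paper:

```python
def precision_recall_f1(y_true, y_pred, positive):
    """Per-class Precision, Recall and F1 for multi-class predictions,
    treating `positive` as the class of interest (one-vs-rest)."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical four-class motor-imagery labels: cursor directions.
y_true = ["left", "left", "right", "up", "down", "left"]
y_pred = ["left", "right", "right", "up", "down", "left"]
p, r, f1 = precision_recall_f1(y_true, y_pred, positive="left")
print(round(p, 3), round(r, 3), round(f1, 3))
```

Here every predicted "left" is correct (precision 1.0), but one true "left" is missed (recall 2/3), and F1 is their harmonic mean (0.8).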
Open Access Article
An Efficient Deep Learning Framework for Optimized Event Forecasting
by Emad Ul Haq Qazi, Muhammad Hamza Faheem, Tanveer Zia, Muhammad Imran and Iftikhar Ahmad
Information 2024, 15(11), 701; https://doi.org/10.3390/info15110701 - 4 Nov 2024
Abstract
There have been several catastrophic events that have impacted multiple economies and resulted in thousands of fatalities, and violence has generated severe political and financial crises. Multiple studies have centered on the artificial intelligence (AI) and machine learning (ML) approaches most widely used in practice to detect or forecast violent activities. However, machine learning algorithms become less accurate in identifying and forecasting violent activity as data volume and complexity increase. For the prediction of future events, we propose a hybrid deep learning (DL)-based model composed of a convolutional neural network (CNN), long short-term memory (LSTM), and an attention layer to learn temporal features from the benchmark Global Terrorism Database (GTD). The GTD is an internationally recognized database that includes around 190,000 violent events and occurrences worldwide from 1970 to 2020. We took two factors into account for this experimental work: the type of event and the type of object used. The LSTM takes the complex features extracted by the CNN to determine the chronological links between data points, while the attention layer is used for the time series prediction of an event. The results show that the proposed model achieved good accuracies for both cases, type of event and type of object, compared to benchmark studies using the same dataset (98.1% and 97.6%, respectively).
Full article
(This article belongs to the Special Issue Advancements in Healthcare Data Science: Innovations, Challenges and Applications)
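The role of the attention layer described above, weighting the LSTM's hidden states by relevance before the final prediction, can be illustrated generically. A sketch of softmax attention pooling; this is not the authors' exact architecture, and the hidden states and scores below are invented:

```python
import math

def attention_pool(hidden_states, scores):
    """Collapse a sequence of hidden states into one context vector by
    softmax-weighting each time step, as in an attention layer placed
    after an LSTM."""
    m = max(scores)
    exp = [math.exp(s - m) for s in scores]   # numerically stable softmax
    total = sum(exp)
    weights = [e / total for e in exp]
    dim = len(hidden_states[0])
    context = [sum(w * h[d] for w, h in zip(weights, hidden_states))
               for d in range(dim)]
    return weights, context

# Three time steps with 2-dimensional hidden states; the middle step
# gets the highest relevance score and dominates the context vector.
states = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
weights, context = attention_pool(states, scores=[0.1, 2.0, 0.1])
print(max(weights) == weights[1])  # True: step 2 dominates
```

The weights sum to 1, so the context vector is a convex combination of the time steps, letting later layers focus on the most informative moments in the event sequence.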
Open Access Article
AI Impact on Hotel Guest Satisfaction via Tailor-Made Services: A Case Study of Serbia and Hungary
by Ranko Makivić, Dragan Vukolić, Sonja Veljović, Minja Bolesnikov, Lóránt Dénes Dávid, Andrea Ivanišević, Mario Silić and Tamara Gajić
Information 2024, 15(11), 700; https://doi.org/10.3390/info15110700 - 4 Nov 2024
Abstract
This study examines the level of implementation of artificial intelligence (AI) in the personalization of hotel services and its impact on guest satisfaction through an analysis of tourists’ attitudes and behaviors. The focus of the research is on how personalized recommendations for food and beverages, activities, and room services, delivered by trustworthy AI systems, the digital experience, and the perception of privacy and data security influence overall guest satisfaction. The research was conducted in Serbia and Hungary, using structural models to assess and analyze direct and indirect effects. The results show that AI personalization contributes significantly to guest satisfaction, with mediating variables such as trust in AI systems and technological experience playing a key role. A comparative analysis highlights differences between Hungary, a member of the European Union, and Serbia, a country in transition, shedding light on the specific regulatory frameworks and cultural preferences of these countries.
Full article
(This article belongs to the Special Issue Artificial Intelligence Methods for Human-Computer Interaction)
Topics
Topic in Algorithms, Data, Information, Mathematics, Symmetry
Decision-Making and Data Mining for Sustainable Computing
Topic Editors: Sunil Jha, Malgorzata Rataj, Xiaorui Zhang
Deadline: 30 November 2024
Topic in Algorithms, Axioms, Information, Mathematics, Symmetry
Fuzzy Number, Fuzzy Difference, Fuzzy Differential: Theory and Applications
Topic Editors: Changyou Wang, Dong Qiu, Yonghong Shen
Deadline: 20 December 2024
Topic in BDCC, Digital, Information, Mathematics, Systems
Data-Driven Group Decision-Making
Topic Editors: Shaojian Qu, Ying Ji, M. Faisal Nadeem
Deadline: 31 December 2024
Topic in Future Internet, Information, J. Imaging, Mathematics, Symmetry
Research on Deep Neural Networks for Video Motion Recognition
Topic Editors: Hamad Naeem, Hong Su, Amjad Alsirhani, Muhammad Shoaib Bhutta
Deadline: 31 January 2025
Special Issues
Special Issue in Information
Artificial Intelligence and Games Science in Education
Guest Editors: Petros Lameras, Sylvester Arnab, Panagiotis Petridis
Deadline: 15 November 2024
Special Issue in Information
Knowledge Representation and Ontology-Based Data Management
Guest Editors: Domenico Savo, Gianluca Cima, Riccardo Rosati
Deadline: 15 November 2024
Special Issue in Information
Technology, Learning and Teaching of Electronics with Information Applications
Guest Editors: Raúl Igual, Inmaculada Plaza
Deadline: 15 November 2024
Special Issue in Information
Visual Information Processing in Computer Game
Guest Editor: Jong-Seung Park
Deadline: 15 November 2024
Topical Collections
Topical Collection in Information
Knowledge Graphs for Search and Recommendation
Collection Editors: Pierpaolo Basile, Annalina Caputo
Topical Collection in Information
Augmented Reality Technologies, Systems and Applications
Collection Editors: Ramon Fabregat, Jorge Bacca-Acosta, N.D. Duque-Mendez
Topical Collection in Information
Natural Language Processing and Applications: Challenges and Perspectives
Collection Editor: Diego Reforgiato Recupero