Search Results (15)

Search Parameters:
Keywords = in-the-wild studies

24 pages, 538 KiB  
Article
Call to Action: Investigating Interaction Delay in Smartphone Notifications
by Michael Stach, Lena Mulansky, Manfred Reichert, Rüdiger Pryss and Felix Beierle
Sensors 2024, 24(8), 2612; https://doi.org/10.3390/s24082612 - 19 Apr 2024
Viewed by 1407
Abstract
Notifications are an essential part of the user experience on smart mobile devices. While some apps have to notify users immediately after an event occurs, others can schedule notifications strategically to notify them only at opportune moments. This tailoring allows apps to shorten the users’ interaction delay. In this paper, we present the results of a comprehensive study that identified the factors that influence users’ interaction delay to their smartphone notifications. We analyzed almost 10 million notifications collected in-the-wild from 922 users and computed their response times with regard to their demographics, their Big Five personality trait scores and the device’s charging state. Depending on the app category, the following tendencies can be identified over the course of the day: most notifications were logged in the late morning and late afternoon, and this number decreases in the evening, between 8 p.m. and 11 p.m., a period that also exhibits the lowest average interaction delays during the daytime. We also found that the user’s sex and age are significantly associated with the response time. Based on the results of our study, we encourage developers to incorporate more information on the user and the executing device in their notification strategy to notify users more effectively.
(This article belongs to the Special Issue Intelligent Sensors for Healthcare and Patient Monitoring)
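
As an illustration of the kind of aggregation such a study involves, here is a minimal sketch (not the authors' pipeline) that computes interaction delays from hypothetical notification logs and summarizes them by app category and hour of day; all column names and values are assumptions.

```python
# Hypothetical sketch (not from the paper): aggregating notification response
# delays by app category and posting hour. Column names are assumptions.
import pandas as pd

logs = pd.DataFrame({
    "app_category": ["messaging", "news", "messaging", "email"],
    "posted_at":    pd.to_datetime(["2024-01-05 09:12", "2024-01-05 10:40",
                                    "2024-01-05 20:15", "2024-01-05 21:02"]),
    "opened_at":    pd.to_datetime(["2024-01-05 09:14", "2024-01-05 11:30",
                                    "2024-01-05 20:16", "2024-01-05 22:45"]),
})

logs["delay_s"] = (logs["opened_at"] - logs["posted_at"]).dt.total_seconds()
logs["hour"] = logs["posted_at"].dt.hour

# Median interaction delay per app category and posting hour.
summary = logs.groupby(["app_category", "hour"])["delay_s"].median()
print(summary)
```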

20 pages, 629 KiB  
Article
Lessons in Developing a Behavioral Coding Protocol to Analyze In-the-Wild Child–Robot Interaction Events and Experiments
by Xela Indurkhya and Gentiane Venture
Electronics 2024, 13(7), 1175; https://doi.org/10.3390/electronics13071175 - 22 Mar 2024
Cited by 1 | Viewed by 1359
Abstract
Behavioral analyses of in-the-wild HRI studies generally rely on interviews or visual information from videos. This can be very limiting in settings where video recordings are not allowed or limited. We designed and tested a vocalization-based protocol to analyze in-the-wild child–robot interactions based upon a behavioral coding scheme utilized in wildlife biology, specifically in studies of wild dolphin populations. The audio of a video or audio recording is converted into a transcript, which is then analyzed using a behavioral coding protocol consisting of 5–6 categories (one indicating non-robot-related behavior, and 4–5 categories of robot-related behavior). Refining the code categories and training coders resulted in increased agreement between coders, but only to a level of moderate reliability, leading to our recommendation that it be used with three coders to assess where there is majority consensus, and thereby correct for subjectivity. We discuss lessons learned in the design and implementation of this protocol and the potential for future child–robot experiments analyzed through vocalization behavior. We also perform a few observational behavior analyses from vocalizations alone to demonstrate the potential of this field.
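
To illustrate the recommended three-coder setup, here is a minimal sketch (not the authors' tooling) that resolves per-utterance codes by majority vote and flags utterances without consensus; the category labels are assumptions.

```python
# Hypothetical sketch: resolving utterance-level codes from three coders by
# majority vote, flagging items with no majority.
from collections import Counter

# Assumed labels: "NR" = non-robot-related, R1..R5 = robot-related categories.
coder_a = ["NR", "R1", "R2", "R3", "NR"]
coder_b = ["NR", "R1", "R4", "R3", "R1"]
coder_c = ["NR", "R2", "R2", "R3", "R5"]

consensus = []
for codes in zip(coder_a, coder_b, coder_c):
    label, count = Counter(codes).most_common(1)[0]
    consensus.append(label if count >= 2 else None)  # None = no majority

print(consensus)  # ['NR', 'R1', 'R2', 'R3', None]
```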

19 pages, 16005 KiB  
Article
Comparative Assessment of Neural Radiance Fields and Photogrammetry in Digital Heritage: Impact of Varying Image Conditions on 3D Reconstruction
by Valeria Croce, Dario Billi, Gabriella Caroti, Andrea Piemonte, Livio De Luca and Philippe Véron
Remote Sens. 2024, 16(2), 301; https://doi.org/10.3390/rs16020301 - 11 Jan 2024
Cited by 12 | Viewed by 4346
Abstract
This paper conducts a comparative evaluation between Neural Radiance Fields (NeRF) and photogrammetry for 3D reconstruction in the cultural heritage domain. Focusing on three case studies, of which the Terpsichore statue serves as a pilot case, the research assesses the quality, consistency, and efficiency of both methods. The results indicate that, under conditions of reduced input data or lower resolution, NeRF outperforms photogrammetry in preserving completeness and material description for the same set of input images (with known camera poses). The study recommends NeRF for scenarios requiring extensive area mapping with limited images, particularly in emergency situations. Despite NeRF’s developmental stage compared to photogrammetry, the findings demonstrate its higher potential for describing material characteristics and rendering homogeneous textures with enhanced visual fidelity and accuracy; however, NeRF seems more prone to noise effects. The paper advocates for the future integration of NeRF with photogrammetry to address respective limitations, offering a more comprehensive representation for cultural heritage preservation tasks. Future developments include extending applications to planar surfaces and exploring NeRF in virtual and augmented reality, as well as studying NeRF evolution in line with emerging trends in semantic segmentation and in-the-wild scene reconstruction.
(This article belongs to the Special Issue Photogrammetry Meets AI)

19 pages, 18837 KiB  
Article
Detecting Deceptive Dark-Pattern Web Advertisements for Blind Screen-Reader Users
by Satwik Ram Kodandaram, Mohan Sunkara, Sampath Jayarathna and Vikas Ashok
J. Imaging 2023, 9(11), 239; https://doi.org/10.3390/jimaging9110239 - 6 Nov 2023
Cited by 6 | Viewed by 4791
Abstract
Advertisements have become commonplace on modern websites. While ads are typically designed for visual consumption, it is unclear how they affect blind users who interact with the ads using a screen reader. Existing research studies on non-visual web interaction predominantly focus on general web browsing; the specific impact of extraneous ad content on blind users’ experience remains largely unexplored. To fill this gap, we conducted an interview study with 18 blind participants; we found that blind users are often deceived by ads that contextually blend in with the surrounding web page content. While ad blockers can address this problem via a blanket filtering operation, many websites are increasingly denying access if an ad blocker is active. Moreover, ad blockers often do not filter out internal ads injected by the websites themselves. Therefore, we devised an algorithm to automatically identify contextually deceptive ads on a web page. Specifically, we built a detection model that leverages a multi-modal combination of handcrafted and automatically extracted features to determine if a particular ad is contextually deceptive. Evaluations of the model on a representative test dataset and ‘in-the-wild’ random websites yielded F1 scores of 0.86 and 0.88, respectively.
(This article belongs to the Special Issue Image and Video Processing for Blind and Visually Impaired)
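
As a rough illustration of the multi-modal feature idea, the following sketch (not the authors' model) concatenates hypothetical handcrafted features with automatically extracted embeddings, trains a generic classifier on synthetic data, and reports an F1 score; all features, labels, and the classifier choice are assumptions.

```python
# Hypothetical sketch: fusing handcrafted ad features with extracted embeddings
# for a binary "deceptive ad" label, evaluated with F1.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
handcrafted = rng.random((n, 5))   # e.g., position, size, style similarity (assumed)
embeddings = rng.random((n, 32))   # e.g., text embedding of the ad snippet (assumed)
X = np.hstack([handcrafted, embeddings])
y = rng.integers(0, 2, n)          # 1 = contextually deceptive, 0 = benign (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("F1:", f1_score(y_te, clf.predict(X_te)))
```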

24 pages, 2781 KiB  
Article
Applying Few-Shot Learning for In-the-Wild Camera-Trap Species Classification
by Haoyu Chen, Stacy Lindshield, Papa Ibnou Ndiaye, Yaya Hamady Ndiaye, Jill D. Pruetz and Amy R. Reibman
AI 2023, 4(3), 574-597; https://doi.org/10.3390/ai4030031 - 31 Jul 2023
Cited by 1 | Viewed by 2125
Abstract
Few-shot learning (FSL) describes the challenge of learning a new task using a minimum amount of labeled data, and we have observed significant progress made in this area. In this paper, we explore the effectiveness of the FSL theory by considering a real-world problem where labels are hard to obtain. To assist a large study on chimpanzee hunting activities, we aim to classify various animal species that appear in our in-the-wild camera traps located in Senegal. Using the philosophy of FSL, we aim to train an FSL network to learn to separate animal species using large public datasets and implement the network on our data with its novel species/classes and unseen environments, needing only to label a few images per new species. Here, we first discuss constraints and challenges caused by having in-the-wild uncurated data, which are often not addressed in benchmark FSL datasets. Considering these new challenges, we create two experiments and corresponding evaluation metrics to determine a network’s usefulness in a real-world implementation scenario. We then compare results from various FSL networks, and describe how factors may affect a network’s potential real-world usefulness. We consider network design factors such as distance metrics or extra pre-training, and examine their roles in a real-world implementation setting. We also consider additional factors such as support set selection and ease of implementation, which are usually ignored when a benchmark dataset has been established.
(This article belongs to the Special Issue Feature Papers for AI)
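
The following sketch (not the paper's network) illustrates the few-shot idea in its simplest form: class prototypes are averaged from a handful of labeled support embeddings, and queries are assigned to the nearest prototype; the embeddings and species names are toy assumptions.

```python
# Hypothetical sketch: nearest-prototype classification in an embedding space,
# the core idea behind many few-shot learning approaches.
import numpy as np

def classify_queries(support_emb, support_labels, query_emb):
    """support_emb: (n_support, d) embeddings of the few labeled images per species."""
    prototypes = {}
    for label in set(support_labels):
        idx = [i for i, l in enumerate(support_labels) if l == label]
        prototypes[label] = support_emb[idx].mean(axis=0)   # class prototype
    labels = list(prototypes)
    protos = np.stack([prototypes[l] for l in labels])
    # Euclidean distance of each query to each prototype; pick the nearest.
    d = np.linalg.norm(query_emb[:, None, :] - protos[None, :, :], axis=-1)
    return [labels[i] for i in d.argmin(axis=1)]

# Toy usage with random 8-D embeddings standing in for a backbone's features.
rng = np.random.default_rng(1)
support = rng.random((6, 8))
queries = rng.random((3, 8))
print(classify_queries(support, ["chimp", "chimp", "baboon", "baboon", "bushbuck", "bushbuck"], queries))
```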

13 pages, 3077 KiB  
Article
Examining Participant Adherence with Wearables in an In-the-Wild Setting
by Hannah R. Nolasco, Andrew Vargo, Niklas Bohley, Christian Brinkhaus and Koichi Kise
Sensors 2023, 23(14), 6479; https://doi.org/10.3390/s23146479 - 18 Jul 2023
Cited by 2 | Viewed by 1688
Abstract
Wearable devices offer a wealth of data for ubiquitous computing researchers. For instance, sleep data from a wearable could be used to identify an individual’s harmful habits. Recently, devices which are unobtrusive in size, setup, and maintenance are becoming commercially available. However, most data validation for these devices comes from brief, short-term laboratory studies or experiments with unrepresentative samples that are also inaccessible to most researchers. For wearables research conducted in-the-wild, running a study carries the risk of financial cost and failure. Thus, when researchers conduct in-the-wild studies, the majority of participants tend to be university students. In this paper, we present a month-long in-the-wild study with 31 Japanese adults who wore a sleep tracking device called the Oura ring. The high device usage observed in this study can be used to inform the design and deployment of longer-term mid-size in-the-wild studies.
(This article belongs to the Section Wearables)
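
As a minimal illustration of how adherence might be quantified in such a deployment (not the study's actual pipeline), the sketch below computes the fraction of study days with recorded data per participant; the column names and study length are assumptions.

```python
# Hypothetical sketch: per-participant adherence as the fraction of study days
# with any recorded sleep data.
import pandas as pd

records = pd.DataFrame({
    "participant": ["p01", "p01", "p01", "p02", "p02"],
    "date": pd.to_datetime(["2023-06-01", "2023-06-02", "2023-06-04",
                            "2023-06-01", "2023-06-03"]),
})

study_days = 30  # length of the deployment in days (assumed)
adherence = records.groupby("participant")["date"].nunique() / study_days
print(adherence)  # fraction of days each participant produced data
```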

12 pages, 1715 KiB  
Article
Multimodal Sensor-Input Architecture with Deep Learning for Audio-Visual Speech Recognition in Wild
by Yibo He, Kah Phooi Seng and Li Minn Ang
Sensors 2023, 23(4), 1834; https://doi.org/10.3390/s23041834 - 7 Feb 2023
Cited by 8 | Viewed by 3727
Abstract
This paper investigates multimodal sensor architectures with deep learning for audio-visual speech recognition, focusing on in-the-wild scenarios. The term “in the wild” is used to describe AVSR for unconstrained natural-language audio streams and video-stream modalities. Audio-visual speech recognition (AVSR) is a speech-recognition task that leverages both an audio input of a human voice and an aligned visual input of lip motions. However, since in-the-wild scenarios can include more noise, AVSR’s performance is affected. Here, we propose new improvements for AVSR models by incorporating data-augmentation techniques to generate more data samples for building the classification models. For the data-augmentation techniques, we utilized a combination of conventional approaches (e.g., flips and rotations), as well as newer approaches, such as generative adversarial networks (GANs). To validate the approaches, we used augmented data from well-known datasets (LRS2—Lip Reading Sentences 2 and LRS3) in the training process, while testing was performed using the original data. The study and experimental results indicated that the proposed AVSR model and framework, combined with the augmentation approach, enhanced the performance of the AVSR framework in the wild for noisy datasets. Furthermore, we discuss the domains of automatic speech recognition (ASR) and audio-visual speech recognition (AVSR) architectures and give a concise summary of the AVSR models that have been proposed.
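
The conventional part of the augmentation strategy can be illustrated with a short sketch (not the paper's pipeline) that generates flipped and slightly rotated copies of a video clip; the clip shape is an assumption, and the GAN-based augmentation is not shown.

```python
# Hypothetical sketch: conventional augmentation of lip-region video frames
# (horizontal flips and small rotations) to enlarge the training set.
import numpy as np
from scipy.ndimage import rotate

def augment_clip(frames: np.ndarray) -> list:
    """frames: (T, H, W) grayscale video clip of the mouth region (assumed shape)."""
    flipped = frames[:, :, ::-1]                                    # horizontal flip
    rotated = rotate(frames, angle=5, axes=(1, 2), reshape=False)   # small in-plane rotation
    return [frames, flipped, rotated]

clip = np.random.rand(16, 64, 64)
augmented = augment_clip(clip)
print(len(augmented), [a.shape for a in augmented])
```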

15 pages, 718 KiB  
Article
ArbGaze: Gaze Estimation from Arbitrary-Sized Low-Resolution Images
by Hee Gyoon Kim and Ju Yong Chang
Sensors 2022, 22(19), 7427; https://doi.org/10.3390/s22197427 - 30 Sep 2022
Cited by 1 | Viewed by 2392
Abstract
The goal of gaze estimation is to estimate a gaze vector from an image containing a face or eye(s). Most existing studies use pre-defined fixed-resolution images to estimate the gaze vector. However, images captured in in-the-wild environments may have various resolutions, and variation in resolution can degrade gaze estimation performance. To address this problem, a gaze estimation method for arbitrary-sized low-resolution images is proposed. The basic idea of the proposed method is to combine knowledge distillation and feature adaptation. Knowledge distillation helps the gaze estimator for arbitrary-sized images generate a feature map similar to that obtained from a high-resolution image. Feature adaptation makes it possible to create a feature map that adapts to various input-image resolutions by using a low-resolution image together with its scale information. The ablation study shows that combining these two ideas improves gaze estimation performance substantially. Experiments using various backbones also demonstrate that the proposed method can be generalized to other popularly used gaze estimation models.
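
A minimal sketch of a feature-level distillation objective in the spirit described above (not the paper's exact loss): the student's feature map from a low-resolution input is pushed toward the teacher's high-resolution feature map alongside a gaze regression term; the tensor shapes and weighting are assumptions.

```python
# Hypothetical sketch: feature distillation plus a gaze regression loss.
import torch
import torch.nn.functional as F

def distillation_loss(student_feat, teacher_feat, pred_gaze, true_gaze, alpha=0.5):
    feat_term = F.mse_loss(student_feat, teacher_feat)   # mimic the HR feature map
    gaze_term = F.l1_loss(pred_gaze, true_gaze)          # simple gaze error proxy
    return gaze_term + alpha * feat_term

# Toy tensors standing in for real network outputs.
loss = distillation_loss(torch.rand(4, 64, 7, 7), torch.rand(4, 64, 7, 7),
                         torch.rand(4, 2), torch.rand(4, 2))
print(loss.item())
```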

30 pages, 2161 KiB  
Article
Investigation into Phishing Risk Behaviour among Healthcare Staff
by Prosper Kandabongee Yeng, Muhammad Ali Fauzi, Bian Yang and Peter Nimbe
Information 2022, 13(8), 392; https://doi.org/10.3390/info13080392 - 18 Aug 2022
Cited by 8 | Viewed by 4820
Abstract
A phishing attack is one of the less complicated ways to circumvent sophisticated technical security measures. It is often used to exploit psychological (as well as other) factors of human users to succeed in social engineering attacks, including ransomware. Guided by the state of the art in phishing simulation studies in healthcare, and after deeply assessing the ethical dilemmas, an SMS-based phishing simulation was conducted among healthcare workers in Ghana. The study adopted an in-the-wild study approach alongside quantitative and qualitative surveys. Among the state-of-the-art studies, the in-the-wild approach was the most commonly used method compared to laboratory-based experiments and statistical surveys because its findings are generally reliable and effective. The attack results showed that 61% of the targeted healthcare staff were susceptible, while some of the healthcare staff were not victims of the attack because they prioritized patient care. Through structural equation modelling, workload was estimated to have a significant effect on self-efficacy risk (r = 0.5, p-value = 0.05), and work emergency predicted a perceived barrier in the reverse direction at a substantial level (r = −0.46, p-value = 0.00). Additionally, Pearson’s correlation showed that the perceived barrier was a predictor of self-reported security behaviour in phishing attacks among healthcare staff. As a result, various suggestions, including an extra workload-balancing layer of security controls in emergency departments and better security training, were made to enhance staff’s conscious care behaviour.
(This article belongs to the Section Information Security and Privacy)
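
As a small illustration of the correlational part of such an analysis (not the study's actual data or model), the sketch below computes a Pearson correlation between two hypothetical survey scores.

```python
# Hypothetical sketch: Pearson correlation between self-reported workload and a
# phishing self-efficacy score, with toy numbers.
from scipy.stats import pearsonr

workload      = [3, 5, 4, 2, 5, 1, 4, 3]   # Likert-style ratings (assumed scale 1-5)
self_efficacy = [4, 2, 3, 5, 1, 5, 2, 4]

r, p = pearsonr(workload, self_efficacy)
print(f"r = {r:.2f}, p = {p:.3f}")
```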

34 pages, 978 KiB  
Review
Macro- and Micro-Expressions Facial Datasets: A Survey
by Hajer Guerdelli, Claudio Ferrari, Walid Barhoumi, Haythem Ghazouani and Stefano Berretti
Sensors 2022, 22(4), 1524; https://doi.org/10.3390/s22041524 - 16 Feb 2022
Cited by 28 | Viewed by 8621
Abstract
Automatic facial expression recognition is essential for many potential applications. Thus, having a clear overview of existing datasets that have been investigated within the framework of facial expression recognition is of paramount importance in designing and evaluating effective solutions, notably for neural network-based training. In this survey, we provide a review of more than eighty facial expression datasets, taking into account both macro- and micro-expressions. The proposed study mostly focuses on spontaneous and in-the-wild datasets, given that the common trend in the research is to consider contexts where expressions are shown spontaneously and in a real context. We also provide instances of potential applications of the investigated datasets, while highlighting their pros and cons. The proposed survey can help researchers gain a better understanding of the characteristics of the existing datasets, thus facilitating the choice of the data that best suits the particular context of their application.
(This article belongs to the Special Issue Feature Papers in the Sensing and Imaging Section 2021)

20 pages, 5330 KiB  
Article
Semi-Supervised Anomaly Detection in Video-Surveillance Scenes in the Wild
by Mohammad Ibrahim Sarker, Cristina Losada-Gutiérrez, Marta Marrón-Romera, David Fuentes-Jiménez and Sara Luengo-Sánchez
Sensors 2021, 21(12), 3993; https://doi.org/10.3390/s21123993 - 9 Jun 2021
Cited by 20 | Viewed by 4559
Abstract
Surveillance cameras are being installed in many everyday public places to maintain public safety. In this video-surveillance context, anomalies occur only for a very short time and very occasionally. Hence, manual monitoring of such anomalies can be exhausting and monotonous, resulting in decreased reliability and speed in emergency situations due to monitor fatigue. Within this framework, the importance of automatic anomaly detection is clear, and a considerable amount of research has therefore been devoted to this topic lately. According to these earlier studies, supervised approaches perform better than unsupervised ones. However, supervised approaches demand manual annotation, making the system’s reliability dependent on the situations covered in training (something difficult to ensure in an anomaly context). In this work, an approach for anomaly detection in video-surveillance scenes based on a weakly supervised learning algorithm is proposed. Spatio-temporal features are extracted from each surveillance video using a temporal convolutional 3D neural network (T-C3D). Then, a novel ranking loss function increases the distance between the classification scores of anomalous and normal videos, reducing the number of false negatives. The proposal has been evaluated and compared against state-of-the-art approaches, obtaining competitive performance without fine-tuning, which also validates its generalization capability. In this paper, the proposal’s design and reliability are presented and analyzed, along with the aforementioned quantitative and qualitative evaluation in in-the-wild scenarios, demonstrating high sensitivity to anomalies in all of them.
(This article belongs to the Special Issue Sensors for Object Detection, Classification and Tracking)
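
A minimal sketch of a hinge-style ranking objective of the kind described (not the paper's exact formulation): the top segment score of an anomalous video is pushed above the top score of a normal video by a margin.

```python
# Hypothetical sketch: a hinge-style ranking loss for weakly supervised
# video anomaly detection.
import torch

def ranking_loss(scores_anomalous, scores_normal, margin=1.0):
    """scores_*: per-segment anomaly scores for one video, shape (n_segments,)."""
    return torch.clamp(margin - scores_anomalous.max() + scores_normal.max(), min=0)

loss = ranking_loss(torch.tensor([0.2, 0.9, 0.4]), torch.tensor([0.1, 0.3, 0.2]))
print(loss)  # tensor(0.4000): 1.0 - 0.9 + 0.3
```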

28 pages, 1401 KiB  
Article
An Efficient 3D Human Pose Retrieval and Reconstruction from 2D Image-Based Landmarks
by Hashim Yasin and Björn Krüger
Sensors 2021, 21(7), 2415; https://doi.org/10.3390/s21072415 - 1 Apr 2021
Cited by 5 | Viewed by 4592
Abstract
We propose an efficient and novel architecture for 3D articulated human pose retrieval and reconstruction from 2D landmarks extracted from a 2D synthetic image, an annotated 2D image, an in-the-wild real RGB image or even a hand-drawn sketch. Given 2D joint positions in a single image, we devise a data-driven framework to infer the corresponding 3D human pose. To this end, we first normalize 3D human poses from a Motion Capture (MoCap) dataset by eliminating translation, orientation, and the skeleton size discrepancies from the poses and then build a knowledge-base by projecting a subset of joints of the normalized 3D poses onto 2D image-planes by fully exploiting a variety of virtual cameras. With this approach, we not only transform 3D pose space to the normalized 2D pose space but also resolve the 2D-3D cross-domain retrieval task efficiently. The proposed architecture searches for poses from a MoCap dataset that are near to a given 2D query pose in a definite feature space made up of specific joint sets. These retrieved poses are then used to construct a weak perspective camera and a final 3D posture under the camera model that minimizes the reconstruction error. To estimate unknown camera parameters, we introduce a nonlinear, two-fold method. We exploit the retrieved similar poses and the viewing directions at which the MoCap dataset was sampled to minimize the projection error. Finally, we evaluate our approach thoroughly on a large number of heterogeneous 2D examples generated synthetically, 2D images with ground truth, a variety of real in-the-wild internet images, and a proof of concept using 2D hand-drawn sketches of human poses. We conduct a pool of experiments to perform a quantitative study on the PARSE dataset. We also show that the proposed system yields competitive, convincing results in comparison to other state-of-the-art methods.
(This article belongs to the Special Issue Sensors for Posture and Human Motion Recognition)
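
The retrieval step can be illustrated with a short sketch (not the authors' system): 2D poses are normalized for translation and scale, and the nearest pre-projected MoCap poses are retrieved by Euclidean distance; the joint count and data are toy assumptions.

```python
# Hypothetical sketch: normalize 2D joint positions and retrieve the nearest
# pre-projected MoCap poses by Euclidean distance.
import numpy as np

def normalize_2d(joints):
    """joints: (n_joints, 2). Remove translation and scale to unit size."""
    centered = joints - joints.mean(axis=0)
    return centered / (np.linalg.norm(centered) + 1e-8)

def retrieve(query_2d, knowledge_base_2d, k=3):
    """knowledge_base_2d: (n_poses, n_joints, 2) projections of normalized 3D poses."""
    q = normalize_2d(query_2d).ravel()
    kb = np.stack([normalize_2d(p).ravel() for p in knowledge_base_2d])
    dists = np.linalg.norm(kb - q, axis=1)
    return np.argsort(dists)[:k]          # indices of the k most similar poses

kb = np.random.rand(100, 15, 2)           # toy knowledge base with 15 joints (assumed)
print(retrieve(np.random.rand(15, 2), kb))
```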

28 pages, 1959 KiB  
Article
Exploring the Role of Trust and Expectations in CRI Using In-the-Wild Studies
by Paulina Zguda, Anna Kołota, Gentiane Venture, Bartlomiej Sniezynski and Bipin Indurkhya
Cited by 7 | Viewed by 3774
Abstract
Studying interactions of children with humanoid robots in familiar spaces in natural contexts has become a key issue for social robotics. To fill this need, we conducted several Child–Robot Interaction (CRI) events with the Pepper robot in Polish and Japanese kindergartens. In this paper, we explore the role of trust and expectations towards the robot in determining the success of CRI. We present several observations from the video recordings of our CRI events and the transcripts of free-format question-answering sessions with the robot using the Wizard-of-Oz (WOZ) methodology. From these observations, we identify children’s behaviors that indicate trust (or lack thereof) towards the robot, e.g., challenging behavior of a robot or physical interactions with it. We also gather insights into children’s expectations, e.g., verifying expectations as a causal process and an agency or expectations concerning the robot’s relationships, preferences and physical and behavioral capabilities. Based on our experiences, we suggest some guidelines for designing more effective CRI scenarios. Finally, we argue for the effectiveness of in-the-wild methodologies for planning and executing qualitative CRI studies.
(This article belongs to the Special Issue Applications and Trends in Social Robotics)

12 pages, 2325 KiB  
Article
Modeling Fabric Movement for Future E-Textile Sensors
by Roope Ketola, Vigyanshu Mishra and Asimina Kiourti
Sensors 2020, 20(13), 3735; https://doi.org/10.3390/s20133735 - 3 Jul 2020
Cited by 5 | Viewed by 4413
Abstract
Studies with e-textile sensors embedded in garments are typically performed on static and controlled phantom models that do not reflect the dynamic nature of wearables. Instead, our objective was to understand the noise e-textile sensors would experience during real-world scenarios. Three types of sleeves, made of loose, tight, and stretchy fabrics, were applied to a phantom arm, and the corresponding fabric movement was measured in three dimensions using physical markers and image-processing software. Our results showed that the stretchy fabrics allowed for the most consistent and predictable clothing movement (average displacement of up to −2.3 ± 0.1 cm), followed by tight fabrics (up to −4.7 ± 0.2 cm), and loose fabrics (up to −3.6 ± 1.0 cm). In addition, the results demonstrated better performance of higher-elasticity (average displacement of up to −2.3 ± 0.1 cm) over lower-elasticity (average displacement of up to −3.8 ± 0.3 cm) stretchy fabrics. For a case study with an e-textile sensor that relies on wearable loops to monitor joint flexion, our modeling indicated errors as high as 65.7° for stretchy fabric with higher elasticity. The results from this study can (a) help quantify errors of e-textile sensors operating “in-the-wild,” (b) inform decisions regarding the optimal type of clothing material used, and (c) ultimately empower studies on noise calibration for diverse e-textile sensing applications.
(This article belongs to the Special Issue Emerging Wearable Sensor Technology in Healthcare)

20 pages, 1716 KiB  
Article
When Personalization Is Not an Option: An In-The-Wild Study on Persuasive News Recommendation
by Cristina Gena, Pierluigi Grillo, Antonio Lieto, Claudio Mattutino and Fabiana Vernero
Information 2019, 10(10), 300; https://doi.org/10.3390/info10100300 - 26 Sep 2019
Cited by 25 | Viewed by 6672
Abstract
Aiming at granting wide access to their contents, online information providers often choose not to have registered users, and therefore must give up personalization. In this paper, we focus on the case of non-personalized news recommender systems, and explore persuasive techniques that can, nonetheless, be used to enhance recommendation presentation, with the aim of capturing the user’s interest in suggested items by leveraging the way news is perceived. We present the results of two evaluations “in the wild”, carried out in the context of a real online magazine and based on data from 16,134 and 20,933 user sessions, respectively, where we empirically assessed the effectiveness of persuasion strategies that exploit logical fallacies and other techniques. Logical fallacies are inferential schemes known since antiquity that, even if formally invalid, appear plausible and are therefore psychologically persuasive. In particular, our evaluations allowed us to compare three persuasive scenarios based on the Argumentum ad Populum fallacy, on a modified version of the Argumentum ad Populum fallacy (Group-Ad Populum), and on no fallacy (neutral condition), respectively. Moreover, we studied the effects of the Accent Fallacy (in its visual variant), and of positive vs. negative framing.
(This article belongs to the Special Issue Personalizing Persuasive Technologies)