
1 Introduction

In live events, particularly sports, essential information such as game status and in-venue guidance is conveyed primarily through audio. Surveys on support for individuals who are deaf or hard of hearing (DHH) [5] have shown that the inability to hear announcements or follow detailed game situations significantly detracts from their experience, causing considerable inconvenience and dissatisfaction. Similarly, for individuals who are blind or have low vision (BLV), the lack of adequate visual information or suitable alternatives can greatly diminish the enjoyment of watching sports. Accessible information provided by professionals or volunteers is therefore crucial for DHH and BLV individuals to fully appreciate and enjoy sports events.

Due to cost and staffing limitations, providing specialized information-support staff at small-scale sports events in Japan is challenging. Converting Japanese speech to text is more complex than English: the language mixes kanji, hiragana, and katakana and has many homophones, which lowers speech recognition accuracy and forces a reliance on manual typing, itself slowed by the need for kanji-kana conversion. Collaborative input [10], in which multiple staff share typing duties, is used but incurs labor costs, and correcting speech recognition errors likewise requires multiple staff. While specialized radio terminals with detailed live commentary are available for BLV spectators, such services are limited to specific matches.

Therefore, we propose a mechanism in which individuals disseminate and share information regardless of their abilities, providing mutual information support. Specifically, we developed a real-time timeline posting and viewing system as an accessible web application that allows spectators to share photos, text, audio, and video during sports events. By aggregating spectator posts into a sports viewing timeline that conveys the match situation and other information, we address the challenge of information support for DHH and BLV individuals. The system also applies to other settings, such as museum visits. This paper details the newly developed web application and presents the results of pilot studies conducted with DHH and BLV participants during sports viewings and museum visits. We analyze the results of these experiments to evaluate the system's effectiveness and identify areas for future improvement.

2 State of the Art

The web application we developed introduces a system that enhances information sharing for everyone, including DHH and BLV individuals. Specifically, it enables various individuals to contribute information leveraging their strengths without requiring specialized skills, thereby facilitating a synergistic exchange of information. We describe this inclusive and collaborative approach to information sharing, adaptable to all abilities, as ‘Information Accessibility 2.0’. Essentially, this application is not only advantageous for DHH or BLV individuals but is universally beneficial.

Previous studies have proposed non-expert speech-to-text interpretation methods for DHH, such as automatic speech recognition [2], crowdsourcing for speech-to-text/sign-language-to-text interpretation [3, 6], and sign language video-to-text conversion using crowdsourcing [7]. Information support for DHH using crowdsourcing at sports games has also been proposed, and prototypes of sports spectator-specific systems for DHH have been evaluated [9]. However, features and accessibility considerations that facilitate ease of use for everyone are limited, and such systems have not been thoroughly evaluated with BLV individuals. Despite some proposals for information support for DHH at sports games, such as display systems using Japanese sign language animation synthesis [8], practice in this area remains immature. The importance of captions in sports broadcasting was discussed in [1]; however, issues such as caption delay and captions obscuring essential information were also raised.

Therefore, we aim to develop an inclusive web application to meet these requirements. Our developed application partially conforms to the Web Content Accessibility Guidelines (WCAG) 2.1, targeting at least Level AA [11]. Compliance is only partial because we cannot control the content users post. However, to enhance the accessibility of these posts, we have introduced methods such as the automatic generation of alternative content using artificial intelligence (AI), e.g., speech recognition or object recognition, and allowing users other than the poster to input additional information as alternative content.

3 System Overview

Our application provides functionalities such as 'tags' and 'filters', enabling users to selectively access only the information they need in real-time. This feature greatly simplifies access to vital information, such as scores or player substitutions, for BLV users. Additionally, when the application is used in museums, setting tags based on location or theme allows desired information to be shared easily. The system is designed for flexibility and user-friendliness: full functionality is available to users who create an account, while those without an account can still post limited content through a browser. This approach accommodates a wide range of information literacy levels.

To use the developed web application, users access the server through a web browser on their smartphone or PC and send/receive timeline data. Communication is facilitated using HTTP and WebSocket protocols. Posts are push-delivered to all users in the same room, and the web browser instantly displays the received posts in a timeline format. Importantly, unlike platforms such as “X”, reloading is unnecessary to see the latest updates. A snapshot of the screen displayed on the user’s web browser is shown in Fig. 1.
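The room-based push delivery described above can be sketched as follows. This is a minimal illustrative model, not the authors' implementation; the `Room` class and its method names are assumptions, and each subscriber callback stands in for a WebSocket `send()` to a connected browser.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Room:
    """One room; every post is pushed to all connected clients in it."""
    name: str
    # Each subscriber is a callback standing in for a WebSocket send().
    subscribers: list[Callable[[dict], None]] = field(default_factory=list)
    timeline: list[dict] = field(default_factory=list)

    def join(self, send: Callable[[dict], None]) -> None:
        """Register a client's send function for push delivery."""
        self.subscribers.append(send)

    def post(self, message: dict) -> None:
        # Append to the shared timeline, then push to every client,
        # so browsers update instantly without reloading.
        self.timeline.append(message)
        for send in self.subscribers:
            send(message)
```

Because the server pushes each post as it arrives, the client only ever appends to its displayed timeline; no polling or reload cycle is needed.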

Figure 1a depicts the main window. The timeline panel on the left side of the screen displays all posts. Although the original posts are in Japanese, they are automatically translated into English for users who are English speakers. In addition to regular posts with usernames, there are ‘flowing’ posts (in this example, corresponding to “It’s very cold in the venue”). These are messages whose sender cannot be identified, promoting casual posts equivalent to cheers in the venue. On the right, the ‘Commentary’ panel uses a filter function to display only posts tagged with ‘commentary’. It is also possible to display multiple filter panels simultaneously. Additionally, features such as notifications through sound for posts of pre-specified types, voice reading capabilities, and the ability to share posts on “X” are also available. Furthermore, various shortcuts are provided, enabling operation solely through keyboard commands. Authorized users can remove inappropriate posts or ban users who breach posting guidelines. Defining specific words that prevent posting is also possible, further bolstering the platform’s moderation tools.
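The filter panel and the word-based posting restriction can each be reduced to a simple predicate over posts. The sketch below is illustrative only; the function names and post structure (`tags` list, `text` field) are assumptions about the data model, not the actual implementation.

```python
def filter_by_tag(posts: list[dict], tag: str) -> list[dict]:
    """Return only the posts carrying the given tag (e.g. 'commentary'),
    as a filter panel would display them."""
    return [p for p in posts if tag in p.get("tags", [])]

def is_postable(text: str, banned_words: set[str]) -> bool:
    """Moderation check: reject a post containing any blocked word."""
    return not any(word in text for word in banned_words)
```

Multiple filter panels then amount to running `filter_by_tag` once per panel over the same timeline.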

Figure 1b displays the posting panel, where users can select any tag when making a post. Furthermore, users can add or remove tags after a post has been made. The panel also supports the posting of images, videos, and audio. If specific tags—‘image analysis’ and ‘audio analysis’—are applied to these media posts, an AI automatically analyzes the content and appends additional information in text form. Figure 1c shows the floor list. As part of the system configuration, multiple ‘rooms’, as shown in Fig. 1d, exist within a ‘floor’. Floors are categorized into three types: those that are visible to all, those that are visible only to those who know the URL of the floor (hidden floors), and those that are accessible only to invited members (members only). The same applies to rooms. In addition, floor members can freely create rooms and invite members to members-only rooms. Opening a room displays the main window, as in Fig. 1a. It is anticipated that a dedicated floor will be created for each event, and a dedicated room will be created for each match.
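The three visibility types, which apply alike to floors and rooms, can be expressed as a single access rule. This is a hedged sketch of the policy as described; the `Visibility` enum and `can_access` function are illustrative names, not part of the real system.

```python
from enum import Enum

class Visibility(Enum):
    PUBLIC = "public"          # visible to all
    HIDDEN = "hidden"          # visible only to those who know the URL
    MEMBERS_ONLY = "members"   # accessible only to invited members

def can_access(visibility: Visibility, knows_url: bool, is_member: bool) -> bool:
    """Decide whether a user may open a floor or room."""
    if visibility is Visibility.PUBLIC:
        return True
    if visibility is Visibility.HIDDEN:
        return knows_url
    return is_member  # MEMBERS_ONLY
```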

Fig. 1. Snapshot of the developed web application: (a) main window, with the original posts and a flowing message on the left and the commentary panel on the right; (b) posting panel with comments and tags; (c) floor list with tiles for lab, organization project, and experiment; (d) room list with a members-only room.

4 Experiment Design

To evaluate the fundamental effectiveness of the developed system, we conducted demonstration experiments during sports viewing and museum visits. These experiments were carried out with the approval of the research ethics review at the authors’ affiliated institution.

4.1 Sports Viewing Experiment

DHH participants watched handball games while engaging with others by answering questions. On January 14, 2024, an initial analysis was conducted with three DHH university students aged 19–20, who completed the System Usability Scale (SUS) after two hours of system use. The participants had limited experience with sports viewing: two rarely attend sports events, and one attends about 1–2 times a year. Their average hearing loss was between 95 and 110 dB in both ears.

For the foundational experiment with BLV individuals, an online environment simulating sports viewing was constructed using Zoom and YouTube live streaming on October 27, 2021. In this experiment, four pairs of BLV participants and one pair of DHH participants watched live-streamed videos of visually impaired bowling from the 17th All Japan Blind Bowling Championship. Each pair was assigned to a breakout room and participated in two sessions, one with and one without the developed web application, to share information while watching the videos.

4.2 Museum Visit Experiment

Six DHH university students aged 19–22 participated alongside five actively engaged hearing experiment collaborators on November 29, 2023. The DHH participants, whose average hearing loss was between 80 and 105 dB in both ears, spent two hours together. Five were visiting the museum for the first time, while one had visited 3–4 times.

5 Results and Discussion

The sports viewing and museum visit experiments provide valuable insights into the effectiveness of the developed web application in enhancing information accessibility for DHH and BLV individuals.

5.1 Sports Viewing Experiment

The DHH participants achieved an average SUS score of 73.3 (standard deviation 7.2), rated as 'Good' [4]. The aspects that most lowered the score were the 'need for technical support' and perceived 'system inconsistencies', averaging 5.8 out of 10. Regarding the overall sports viewing experience facilitated by the system, two participants awarded the maximum score of 5 and one gave a 4 on a 5-point Likert scale. When asked about their agreement with the research project's efforts, responses were favorable: one participant gave a 5 and two gave a 4. While the SUS score indicates the system's potential, further refinements are necessary to improve usability and user experience.
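For reference, the reported scores follow the standard SUS procedure: ten items rated 1–5, with odd items contributing (response − 1) and even items (5 − response), the sum scaled by 2.5 to a 0–100 range. The sketch below implements this generic formula; it is not the participants' raw data, which the paper does not report.

```python
def sus_score(responses: list[int]) -> float:
    """Compute a standard SUS score (0-100) from ten 1-5 responses.
    Odd-numbered items contribute (r - 1); even-numbered items (5 - r)."""
    assert len(responses) == 10
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5
```

A uniformly neutral respondent (all 3s) scores exactly 50, which puts the museum result of 55.0 barely above neutral and the sports result of 73.3 well into the 'Good' band [4].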

The foundational experiment for BLV individuals demonstrated the web application’s potential to support information accessibility in sports viewing scenarios, with more participants reporting obtaining useful information when using the developed system. However, the negative results regarding participants’ ability to provide information themselves suggest the need for improvements and user training to encourage active participation. Future research should explore ways to facilitate and encourage active participation from BLV individuals, such as providing accessible input methods and offering user training sessions.

5.2 Museum Visit Experiment

The DHH participants evaluated the system using the SUS, resulting in an average score of 55.0 (standard deviation 14.1), regarded as 'Poor' [4]. The main factors in the low evaluation were the items "easy to use" and "most people would learn to use this system very quickly" (both averaging 3.8 out of 10). When asked if the overall museum visit experience using the developed system was positive, five out of six participants rated it the highest score of 5 on a 5-point Likert scale, with one participant giving a 4, indicating overall satisfaction with the system. All six participants strongly agreed with the efforts of this research project, giving the highest score of 5. The lower SUS score suggests that the system, initially designed for sports viewing, requires optimization for museum settings. In particular, manually switching tags to match the viewing location contributed to the lower usability scores, indicating that automatic tagging methods could enhance the system's usability in museum contexts.

These results suggest that the developed web application is a promising tool for enhancing information accessibility for DHH and BLV individuals in both sports viewing and museum visit contexts. However, the system requires further improvements and context-specific optimizations to fully realize its potential. Future research should focus on addressing the identified usability issues, exploring automatic tagging methods, and conducting larger-scale experiments to validate the system’s effectiveness in various settings.

6 Conclusion and Future Works

This paper presents a novel web application to enhance information accessibility for DHH and BLV individuals in live events such as sports viewings and museum visits. The results of the sports viewing and museum visit experiments demonstrate the potential of the developed system in enhancing information accessibility for DHH and BLV individuals. However, the experiments also identified areas for improvement, such as the need for technical support, system inconsistencies, and usability issues in the museum setting.

Future work will prioritize incorporating feedback from DHH and BLV individuals to drive the system’s improvement. We will conduct user studies and workshops with DHH and BLV participants to gather insights into their specific needs, preferences, and challenges. This valuable feedback will be used to refine the system’s design, features, and user interface, ensuring that it effectively meets the needs of its target users.

In addition to user feedback, we will explore integrating automatic tagging methods, such as indoor positioning technologies and augmented reality, to enhance the system’s usability and adaptability to different contexts. We will also conduct larger-scale experiments with diverse participant groups and settings to validate the system’s effectiveness and generalizability.

By continuously improving and adapting the web application based on feedback from DHH and BLV individuals, we aim to contribute to developing inclusive technologies that empower individuals with diverse abilities to participate fully in live events and enhance their overall experience.