Article

The Intersection of AI, Ethics, and Journalism: Greek Journalists’ and Academics’ Perspectives

by Panagiota (Naya) Kalfeli * and Christina Angeli
School of Journalism & Mass Communications, Aristotle University of Thessaloniki, 54 625 Thessaloniki, Greece
* Author to whom correspondence should be addressed.
Submission received: 15 November 2024 / Revised: 21 January 2025 / Accepted: 22 January 2025 / Published: 25 January 2025

Abstract
This study aims to explore the perceptions of Greek journalists and academics on the use of artificial intelligence (AI) in Greek journalism, focusing on its benefits, risks, and potential ethical dilemmas. In particular, it seeks to (i) assess the extent of the use of AI tools by Greek journalists; (ii) investigate views on how AI might alter news production, work routines, and labor relations in the field; and (iii) examine perspectives on the ethical challenges of AI in journalism, particularly in regard to AI-generated images in media content. To achieve this, a series of 28 in-depth semi-structured interviews was conducted with Greek journalists and academics. A thematic analysis was employed to identify key themes and patterns. Overall, the findings suggest that AI penetration in Greek journalism is in its early stages, with no formal training, strategy, or framework in place within Greek media. Regarding ethical concerns, there is evident skepticism and caution among journalists and academics about issues such as data bias, transparency, privacy, and copyright, which are further intensified by the absence of a regulatory framework.

1. Introduction

“I’m sorry, Dave. I’m afraid I can’t do that”, responds HAL 9000, a superintelligent, sentient computer with a calm human voice, to astronaut Dave Bowman in Kubrick’s epic science fiction movie 2001: A Space Odyssey (1968). In this emblematic scene, Dr. Bowman, who is locked out of the spacecraft, asks HAL to open the pod bay doors while intending to deactivate the iconic artificial intelligence (AI) assistant, which has shown signs of malfunction. Sensing the threat, HAL refuses to do so, marking a profound moment of conflict between human control and machine autonomy. Throughout science fiction, AI has often been represented as a double-edged sword, either as a helpful assistant or as an evil threat, capable of substituting human logic and behavior with absolute precision. Within this context, HAL’s line in 2001: A Space Odyssey reflects humanity’s ambivalence about advanced AI: people’s hopes and fears about how technology may affect the world we live in. In the end, in the light of a dystopian future, will this technology, which was born to serve us, gradually lead to humanity’s downfall?
Over the past few decades, AI has moved beyond the sphere of science fiction to become a part of our everyday lives, reshaping nearly every aspect of modern society. It has enhanced our abilities in areas like driving, avoiding traffic, connecting with friends, choosing the perfect movie, and even cooking healthier meals [1]. Today, AI-powered technologies significantly impact industries ranging from scientific discovery and healthcare to smart cities, transport, and sustainability [1]. From virtual or text-based assistants like Siri, Alexa, and ChatGPT that assist with daily tasks to advanced algorithms and machine learning models capable of driving autonomous vehicles or diagnosing diseases with impressive accuracy, AI is transforming our world but also introducing significant challenges and risks.
Among these transformations, its integration into journalism has sparked intense debates on its potential negative impacts on journalism, particularly in terms of content quality and ethical implications [2,3,4]. However, the use of AI in journalism also provides a range of benefits [3]. For example, AI can make journalists’ work more productive and efficient by relieving them from certain routine tasks [5]. That being said, overall, it is important to stress that the discussion on the impact of AI on journalism is part of a larger conversation on media digitization and the transition to a journalistic ecosystem that uses applications, algorithms, and social media in ways that transform traditional journalism [6,7]. In this context, AI technologies, no matter whether their influence is short-term or long-term, are part of a broader framework of technological change reshaping journalism [7].
The existing literature provides extensive insights into the global trends of AI in journalism [8], particularly in countries like the USA and the UK [9]. These studies explore AI’s role in automating news production, enhancing audience engagement, and supporting investigative reporting [2,8]. Despite global advancements in AI-driven journalism, however, there remains a notable gap in empirical research exploring its adoption and ethical challenges [8,9], particularly in localized contexts and smaller media ecosystems like that of Greece, where economic, linguistic, and structural challenges often hinder technological adoption.
This study addresses this gap by investigating the perceptions of Greek journalists and academics regarding AI’s role in journalism, focusing on its integration into the news production process and examining its potential benefits, shortcomings, risks, and associated ethical concerns. The Greek media system is characterized by several structural challenges, including a high concentration of ownership, limited financial resources, and a difficult economic environment, particularly following the financial crisis of 2008–2012. These challenges have led to significant job losses and the closure of major outlets [10]. Furthermore, the adoption of new technologies has been relatively slow [11], constrained by economic limitations and a lack of investment in innovation. Within this context, by adding Greece’s experience as a microcosm for understanding AI’s impact on journalism, this research enriches the international literature on the future of AI in the media industry. It offers valuable insights not only for pioneering environments but also for media systems facing economic strain and technological transition, highlighting both opportunities and challenges in contexts marked by limited resources and linguistic diversity.
To bridge this gap, this research seeks to answer the following key questions:
  • What are the perceptions and experiences of Greek journalists and academics regarding the application of AI in journalism?
  • What are their perceptions regarding the ethical challenges and dilemmas raised by the use of AI in journalism?
In particular, this paper aims to, first, examine the degree of utilization of AI tools by Greek journalists; second, investigate the perceptions of journalists and academics regarding the changes that AI will bring about in news production models, work routines, and labor relations in journalism; and third, explore the perceptions of journalists and academics regarding the ethical dilemmas and challenges raised by the use of AI in journalism and especially by AI-generated images in media content. For this purpose, a series of 28 in-depth semi-structured interviews with a sample of Greek journalists and academics was conducted. Thematic analysis was used to identify common themes, patterns, and ideas that stood out.
In the following sections, this paper initially examines the most representative research that has looked into journalism and AI. Subsequently, the methodology of this study is presented, and the findings are discussed, highlighting the ethical challenges in applying AI to journalism.

2. Literature Review

2.1. Historical Context of the Role of Technology in Journalism

Throughout media history, technologies of all kinds (including print, radio, and television technologies) have been crucial to the development of journalism and the media [12,13]. It was not too long ago that newsrooms did not have access to contemporary tools such as smartphones, voice recorders, cameras, email, and the internet. In fact, even fifteen years ago, “MoJos” (mobile journalists) rarely appeared in the newsroom, equipped with a laptop, digital and video cameras, and the means to upload content directly to the website [14]. But as technology advanced quickly, journalists had to adapt their techniques as well [15]. In recent decades, technology has grown exceptionally rapidly, bringing about transformative changes in various aspects of human life. At the same time, it has affected the way journalists engage with the world outside the newsroom, while digital tools have made it easier to manage editorial tasks like editing, proofreading, visualizing, and content design [16].

2.2. The Rise of AI in Journalism

As happened in the past with other technologies, recently, the intersection of journalism and AI has become one of the most important media issues of our time. With significant turning points and technical breakthroughs, the history of AI in journalism is a journey of innovation, challenges, and developing capacities. This development can be divided into discrete stages, each of which made a substantial contribution to the field [17]. But how do we understand AI? As magic? A “black box”? Or as a set of technologies, ideas, and techniques at the service of humans?
The conceptual foundation for artificial intelligence (AI) was established by Alan Turing, who is widely regarded as the father of computer science and AI. In his 1950 paper Computing Machinery and Intelligence, Turing introduced the idea that computers could exhibit intelligent behavior and proposed the Turing test as a way to measure whether a machine can imitate human responses. Building on Turing’s pioneering work, John McCarthy, who coined the term “artificial intelligence” in 1956 [4], defined AI as “the science and engineering of making intelligent machines”, with the aim to create systems that perform tasks that typically require human intelligence. This definition has endured, and similarly, Beckett describes AI as a “collection of ideas, technologies, and techniques related to a computer system’s ability to perform tasks usually requiring human intelligence” [18] (p. 16). These tasks include learning, reasoning, problem-solving, perception, and language understanding, among others. In order to complete such tasks, these cognitive technologies employ two fundamental characteristics: (a) autonomy and (b) the ability to learn via experience [4].
In journalism, the use of AI is described by various terms, with each one emphasizing different aspects of its application in the field. Some of the most common terms include automated journalism, which refers to the use of algorithms and software to automatically generate news articles from data, such as reports on financial earnings, sports results, and other data-driven stories without human intervention [19]; algorithmic journalism, which not only involves automated writing but also the use of algorithms for tasks like data analysis, content recommendation, and audience engagement [20]; computational journalism, which includes the use of computational techniques for gathering, analyzing, and presenting news [16]; and robot journalism, which emphasizes the role of “robots” or AI systems in generating news content, often focusing on the narrative that machines are taking over certain aspects of journalistic work [21].
One of the pioneering efforts to employ AI in journalism was the initiative of the Associated Press (AP) in the early 2010s to partner with Automated Insights and use their technology [17], allowing the news agency to automatically generate earnings reports for thousands of companies, a task that had been time-consuming for journalists until then [22]. The application of AI allowed AP—and other news organizations, such as Reuters [17]—to quickly produce accurate and consistent reports on financial data, sports results, and other structured datasets, freeing up human journalists to “do more journalism and less data processing”, as the agency’s managing director, Lou Ferrara, stressed [21]. According to Sun et al. [23], Reuters, The New York Times, and other significant global mainstream media typically view the “intelligent editing process” as an important innovation that focuses on fusing AI technology with their primary news business.

2.3. AI in Newsgathering, Production, and Distribution

The body of literature on AI and journalism has been expanding quickly. Numerous studies worldwide aim to explore the application of AI to journalism, examining both the opportunities and challenges presented by this new technological development. According to Beckett [18], AI in journalism operates in three main areas: newsgathering, production, and the distribution of news. In his comprehensive survey of AI and related technologies across 71 news organizations from 32 different countries, just under half of the respondents reported using AI for newsgathering, two-thirds for production, and just over half for news distribution [18].
AI activities for newsgathering include not only collecting material and sourcing information but also assisting editorial teams in assessing what will interest users, generating story ideas, identifying trends, and extracting information or content [18]. At the same time, one of the ways AI is transforming journalism is in news production, where algorithms can generate news stories without human intervention. This technology, as seen in the example of the Associated Press, is particularly useful for producing data-driven reports such as financial reports, sports updates, and weather forecasts. According to Clerwall [24], automated journalism can produce content at a speed and volume that surpasses human capability, thus allowing news organizations to cover more topics and provide updates. Moreover, AI tools such as natural language processing (NLP) and machine learning are helping journalists in their research and writing processes. These tools can analyze large datasets, identify trends, and even suggest story angles. In addition, Casswell and Dörr [19] note that AI can enhance investigative journalism by uncovering hidden patterns in vast amounts of data and, as a result, enable journalists to produce more in-depth and comprehensive stories.
At the same time, AI plays a significant role in the dissemination of news. Machine learning algorithms are used by personalized news delivery systems to customize content to individual preferences and behaviors. Meijer and Kormelink [25] note that by presenting readers with news articles that are relevant to their interests, these systems can enhance user engagement and raise the likelihood that users will consume this news content. However, the use of AI in news distribution also raises concerns, among others, about the creation of “filter bubbles” [26], where users are only exposed to news that aligns with their existing beliefs and opinions. This can contribute to a fragmented and polarized information environment. Pariser [27] warned of the dangers of personalized algorithms that prioritize user preferences over journalistic objectivity, potentially undermining the democratic role of the media in providing a balanced and diverse range of perspectives.

2.4. Journalism Ethics as a Theoretical Foundation

This study, following the approach of previous research [28], employs journalism ethics as its theoretical and conceptual framework to examine journalistic practices, emphasizing the need for a critical evaluation of the ethical challenges posed by the growing use of AI in journalism. Journalism ethics refers to the application of ethical principles that govern the social practice of journalism across its diverse technological forms [29]. As these technological forms continue to shape the core elements of journalism, adopting an ethical perspective is essential for evaluating their impact on journalistic values, integrity, and the broader societal implications of information dissemination [30].
Regarding the ethical challenges and dilemmas raised by the application of AI to journalism, a significant concern is the potential loss of jobs for journalists who will be replaced by automated systems [18]. The transparency and accountability of AI-generated content also emerge as critical issues. Diakopoulos [5] suggests that news organizations should prioritize transparency by revealing the use of AI and explaining how articles are produced, a step essential to maintaining public trust in the media. Another ethical consideration is the risk that AI could perpetuate biases present in the data it is trained on [17]. Noble [31], for instance, argues that AI systems can reinforce existing social biases and inequalities if they are not carefully monitored. This issue is particularly alarming in journalism, where unbiased and fair reporting is a fundamental principle.
By grounding the analysis in the framework of journalism ethics, this study aims to provide a nuanced understanding of the ethical challenges arising at the intersection of AI and journalistic practices in the modern media landscape. The adoption of this framework proves essential in several ways. Deuze [32], for example, highlights that journalism ethics provides a normative structure designed to safeguard core journalistic values. In this context, a normative framework refers to a set of guidelines or values that protect the fundamental pillars of journalism, such as fairness, impartiality, accuracy, and integrity. These core principles serve as the basis for evaluating the ethical implications of integrating AI tools into journalistic practices. In practical terms, this framework enables us to assess whether the use of AI in journalism aligns with established ethical standards, ensuring that automation and technological advances do not compromise the quality or integrity of journalistic work.
Despite these developments in AI, human beings will continue to play a crucial role in journalism. Complex stories frequently require a level of interpretation that AI cannot provide at this moment. At the same time, human journalists are crucial in ethical decision-making. They are essential in deciding the framing of news stories and making sure that they are in the public interest and compliant with ethical norms. This is especially crucial in delicate situations, where the public’s right to know must be balanced against the possibility of harm [17].

3. Method

This study is based on 28 semi-structured in-depth interviews with 12 journalists from different Greek media outlets (online, television, newspapers) and 16 academics from different Greek universities. The inclusion of both journalists and academics in this study was a deliberate choice, aiming to provide a comprehensive analysis of the intersection of AI, ethics, and journalism within the Greek context. Academics, as experts in both media systems and AI technologies, offer critical theoretical perspectives that complement the practical insights of journalists. This dual approach allows for a deeper understanding of the topic, bridging the gap between theoretical knowledge and practical application. In particular, academics can identify broader implications, regulatory challenges, and ethical concerns that may not be immediately apparent to practitioners. This sampling choice has been effectively employed in previous studies on the subject, including those conducted by Noain-Sanchez [4], Fanta [33], and Sandoval-Martín and La-Rosa [34], which highlighted the changing role of AI in journalism and its ethical implications.
The final sample consisted of 28 participants, including 16 males and 12 females, ensuring a balanced gender representation. Participants ranged in age from their 20s to their 50s, encompassing both early career and experienced professionals in their respective fields. The journalists in the sample worked across diverse platforms, including online news outlets, newspapers, television, and freelance journalism, holding roles such as editors, investigative journalists, reporters, and content creators. The academics’ research interests included media ethics, AI in journalism, and digital transformation in media systems, with several participants occupying leadership positions in academic institutions.
To maintain anonymity, all participants are referred to under the code PJ (for participating journalists) or PA (for participating academics), together with a serial number according to the order in which the interview was conducted (see Appendix A for more details). In all interviews, an interview guide designed ad hoc for the project (see Appendix B) was followed and applied in a semi-structured manner. Two different versions of the guide were constructed, one for journalists and one for academics. Both guides shared a common structure but were customized to address specific aspects of each group’s experiences and expertise. The use of two guides ensured that the study captured diverse perspectives while maintaining consistency across thematic axes.
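As a minimal sketch, the anonymization scheme described above (a PJ or PA prefix plus a serial number reflecting interview order within each group) could be generated as follows; the interview ordering shown is hypothetical, not the study's actual sequence:

```python
# Assign anonymized participant codes in interview order:
# journalists get the prefix PJ, academics get PA, each with
# its own serial counter (hypothetical ordering for illustration).
interview_order = ["journalist", "academic", "journalist", "academic", "academic"]

prefixes = {"journalist": "PJ", "academic": "PA"}
counters = {"journalist": 0, "academic": 0}

codes = []
for group in interview_order:
    counters[group] += 1
    codes.append(f"{prefixes[group]}{counters[group]}")

print(codes)  # ['PJ1', 'PA1', 'PJ2', 'PA2', 'PA3']
```

Keeping a separate counter per group preserves the convention that, e.g., PJ4 and PA4 refer to different interviews even though both carry the serial number 4.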
The interview guide’s structure comprised six axes, each based on previous scholarly research, ensuring that the questions were grounded in established theoretical frameworks while also addressing key areas relevant to the research topic. The first axis explores the profile of the participants [28] and either their relationship with technology in their journalistic work (for journalists) or their research interests on journalism and AI (for academics) [4,33,34]. The second axis examines the perceptions of the participants regarding the major changes that have taken place in the journalism landscape over the last decade and the role of technology in these changes [18]. The third axis seeks to investigate the perceptions and experiences of the participants in regard to the application of AI in the Greek journalistic context in particular (use of AI in Greek journalism, types of AI applications used, training of journalists in AI, media planning for AI integration) [18,28,33]. The fourth axis explores the perceptions of the participants regarding the changes (positive and negative) that the implementation of AI in journalism will bring about. The fifth axis examines participants’ perceptions of the ethical challenges and dilemmas raised by the use of AI in journalism [18,28].
Finally, a sixth axis seeks to investigate the perceptions of the participants regarding the use of content created with the help of AI in journalism. Within this context, participants were shown two images that were created using AI based on thirty-two witness statements from refugees and migrants detained in Australian offshore facilities on Nauru and Manus Island, facilities to which journalists did not have access. The AI-generated images represented the squalid living conditions inside the camps and depicted repeated incidents of physical and verbal abuse of refugees and migrants. This project was created by refugees’ and migrants’ advocates but was subsequently utilized by major international news organizations that wanted to highlight this issue. This content was selected due to the ethical concerns it raises. A set of questions was used to examine how participants evaluate the use of this kind of content in journalism, what kinds of advantages and disadvantages they recognize from such use of AI, and, most importantly, what, in their opinion, is an ethical way of using these images.
In this study, semi-structured interviews were utilized to facilitate focused and purposeful discussions. As indicated by prior scholarly research [28], this method offered the flexibility required to delve deeper into the participants’ perspectives, interests, and areas of expertise. All interviews were conducted in Greek (the participants’ native language) via Zoom videoconferences between May and September 2024, with an average duration of 30 to 40 min. Zoom allowed for the inclusion of a diverse range of participants from various geographical locations, ensuring that the study was not limited by physical boundaries. Before each interview, every participant received a letter of invitation and completed the accompanying informed consent form. Both the informed consent form and the interview guide were approved by the University’s Research Ethics Committee (Protocol Number: 211411//2024). After the interviews were conducted, transcription was carried out in two stages: first, in an automated way using the tool turboscribe.ai, a speech transcription program, and second, manually to validate the first version and complete the missing elements.
After conducting and transcribing the interviews, a thematic analysis, as a fundamental qualitative method, was applied to process the data. The conceptual framework of the analysis for the interviews was mainly built upon the theoretical positions of Braun and Clarke [35]. They define thematic analysis as a method for “identifying, analyzing, and reporting patterns (themes) within the data” [35] (p. 79) with the aim of generating an insightful analysis that addresses particular research questions.
Following the six-step process outlined by Braun and Clarke [35], the process was carried out as follows: In phase 1, the entire dataset was thoroughly reviewed to familiarize the researchers with the content. In phase 2, participants’ responses were examined in detail to identify initial codes, and a coding scheme was created. In phase 3, these codes were grouped into meaningful categories, forming the basis of thematic categories. In phase 4, a thematic map was generated [using Canvas] to guide further analysis. In phase 5, the specific themes were refined, and finally, in phase 6, the most relevant and compelling extract examples from the interviews were selected, with each quotation being placed under the theme that it best fit, and the report of the analysis was produced.
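The progression from initial codes (phase 2) to candidate themes (phase 3) can be sketched in a minimal, illustrative way. The codes, extracts, and theme labels below are hypothetical examples, not drawn from the study's actual data:

```python
from collections import defaultdict

# Phase 2 (illustrative): coded extracts as (participant, code, quote) tuples.
coded_extracts = [
    ("PJ1", "no formal AI training", "We were never trained on these tools."),
    ("PJ4", "transcription tools", "I run interviews through a transcription program."),
    ("PA2", "regulatory gap", "There is no framework governing AI use."),
    ("PA5", "regulatory gap", "Regulation lags far behind the technology."),
]

# Phase 3 (illustrative): grouping initial codes into candidate themes.
code_to_theme = {
    "no formal AI training": "Early-stage adoption",
    "transcription tools": "Early-stage adoption",
    "regulatory gap": "Ethical and regulatory concerns",
}

themes = defaultdict(list)
for participant, code, quote in coded_extracts:
    themes[code_to_theme[code]].append((participant, quote))

# Each theme now holds the extracts supporting it, ready for phase 5
# refinement and phase 6 selection of the most compelling quotations.
for theme, extracts in themes.items():
    print(f"{theme}: {len(extracts)} extract(s)")
```

In practice, such grouping is an interpretive, iterative act performed by the researchers (often in dedicated qualitative analysis software) rather than a mechanical mapping; the sketch only makes the data structure of the codes-to-themes step concrete.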
To ensure reliability, the researchers initially developed and tested a coding scheme. In the final stage, in accordance with previous scholarly research [36], researchers worked together to analyze, review, and finalize the emerging themes and sub-themes.
The final aim was to identify common themes, patterns, and ideas that stood out in regard to the participants’ perceptions of, first, the degree of integration of AI tools in Greek journalism; second, the changes that AI will bring about in journalism; and third, the ethical challenges raised by the use of AI in journalism and, especially, by AI-generated images in media content. The themes that emerged are presented in the next section.

4. Findings

Research results reveal that both journalists and academics in our sample share similar views on the adoption of AI in Greek journalism, with participants generally agreeing that its integration is still in the early stages. Journalists tended to emphasize practical challenges, such as the lack of training and tools for AI integration into journalistic workflows, as well as concerns about its potential to replace core journalistic functions and threaten job security. In contrast, academics offered a more comprehensive perspective, focusing on systemic issues like the absence of regulatory frameworks and the broader ethical implications of AI in journalism. For example, while journalists were more concerned with the immediate impact of AI’s implementation on their day-to-day work processes, academics frequently discussed the risks of algorithmic bias and the societal consequences. Moreover, they often framed their discussion around the need to align the implementation of AI with humanity’s collective values and a clear vision for the future.
In the following sections, our findings are presented and contextualized with quotes from participants as appropriate.

4.1. Participants’ Backgrounds and Perspectives on the Role of Technology in Journalism

The great majority of the participating journalists hold university degrees in journalism and mass communications. Regarding their relationship with technology, most journalists describe themselves as fairly proficient and claim to use it daily in their journalistic work. At the same time, all participating academics stated that a significant part of their research focuses on AI and media, making them experts in the field.
Moreover, regarding the major changes in the journalism landscape in Greece over the past decade, participating journalists highlight the closure of major media outlets, which has significantly impacted the media market.
PJ4: The big picture is that after the crisis, after 2010–2012, there were momentous changes [in the Greek media]. That is, newspapers and correspondent offices shut down, TV channels were closed, ERT [public radio and television broadcaster] was closed and later reopened. […] I don’t know, maybe it’s hundreds of journalists who lost their jobs overnight. They had to quickly shift to very opportunistic jobs, change contracts. For us it was a big issue.
When discussing the role of technology in these changes, all participants agree that many of the most significant shifts in journalism have occurred primarily due to technological advancements and the advent of Web 2.0. These changes include (i) a transition from traditional media outlets to online platforms, (ii) the rise of social media, (iii) a need for a constant flow of information, (iv) time pressure, (v) two-way communication between journalists and the public, (vi) the fragmentation of audiences, and (vii) the transformation of journalists into “multi-tools”, referring to the evolving role of journalists who are required to possess a diverse set of skills (e.g., writing, reporting, managing social media, providing multimedia productions) to adapt to the changing media landscape.
PA7: [Technological developments] changed it [journalism] completely. That is, in these last 15 years there is a complete, complete change. Radio has now gone to podcasts. Television has gone to on-demand video. The newspaper is not read. And everybody is on social media and the internet. I think if this happens in 2024, I guess in 2030, I don’t know if we’ll be talking about paper anymore. I don’t know what it’s going to be like.
PA5: [Today a journalist] must be capable of doing everything, doing content management, being content creator at the same time, must have excellent skills in order to manage, let’s say video, images, etc. Basically, what we call a multi-tool.
Overall, participants highlighted the substantial transformations in the Greek journalism landscape over the past decade, including the closure of major media outlets due to the financial crisis, which resulted in widespread job losses and forced journalists to quickly adjust to new, often unstable roles. Additionally, they emphasized the profound impact of technological advancements on the industry, reshaping journalism and requiring professionals to remain adaptable to meet the challenges of a rapidly evolving media landscape.

4.2. The Current Use of AI in Greek Journalism Is Still in Its Early Stages

The thematic analysis revealed that nearly all participants—except for 2 out of 28, who believe it has not penetrated at all—agree that AI is being applied in Greek journalism, though it remains in its early stages. This suggests that AI is currently in an experimental phase rather than fully integrated into the newsgathering and production processes.
The majority of journalists, as depicted in Figure 1, report a limited use of AI during the newsgathering and production phases. They primarily use AI tools for tasks such as machine translation, transcription, image editing or creation, background noise removal, and subtitle generation. A few of the journalists mentioned using language models like ChatGPT both as language assistants and for generating ideas. Investigative journalists, in particular, noted their use of data analysis applications to collect, sort, and analyze large volumes of documents. Notably, many journalists described their engagement with AI using verbs like “play”, “experiment”, and “try”, indicating an exploratory approach to its integration.
PJ1: As a journalist I use several tools that help me in my daily life including image editing, color correction in an image or image creation.
PJ4: For years, we’ve been saying that while so many people have gone to the moon, there is no reliable transcription system for journalism, for interviews. Now that this technology exists, I believe it has transformed our lives. I mean, you simply run the interview through the program and it’s transcribed automatically. That’s a gift from God.
PJ8: I often use ChatGPT which is for text. I don’t copy paste; it just often helps me in the wording of a phrase let’s say.
PJ9: For example, in our research, we extensively use programs which allow us to upload vast amounts of documents, such as PDFs. Much of investigative journalism relies on large databases that may be leaked. Analyzing these documents manually would require countless hours, and it’s likely that we would miss important details. By using AI, we can process everything efficiently, while these programs identify patterns and insights that might otherwise go unnoticed. So, there are such applications that help us a lot, particularly in the field of investigative journalism.
Interestingly, several participants initially reported that they do not use AI at all. However, when we asked clarifying questions, such as whether they use machine translation tools, it became clear that they actually use various tools but do not recognize them as AI, as indicated by the following extract:
PJ7: No, I don’t [use AI]. I don’t think it’s widely used in Greece either—not by anyone I know. […] Well, yes, I use DeepL, of course. There are also apps for transcription. I didn’t use them until recently. I had to do some lengthy transcriptions, and they served as a helpful guide. […] If we consider these as AI applications, then yes. Certainly [I use it] for translation and transcription. I can also think of some agencies where we search for images. They have an application that says AI search, so you can put in the keywords you want and if you click on that, it will come up with something that the AI thinks is closer to the topic you’re looking for. So, I suppose I do use AI.
In conclusion, while AI is still in its early stages of integration into Greek journalism, it is gradually becoming a valuable tool, particularly for supplementary tasks like translation, transcription, image editing, and data analysis, and is increasingly seen as important for improving investigative efforts and automating routine journalistic processes.

Barriers to the Further Use of AI in Greek Journalism

Participants identified several reasons for the limited use of AI in Greek journalism. As portrayed in Figure 2, they highlighted (a) a general delay in embracing or implementing new technologies within the Greek context due to cultural, economic, or structural factors that inhibit quick adaptation to emerging trends; (b) the fact that media owners in Greece lack a well-defined plan or strategy for incorporating AI into their operations and, as a result, AI tools are either less used or not integrated in a way that could maximize their potential benefits, such as automating tasks; (c) skepticism and fear on the part of media professionals, often stemming from (d) inadequate knowledge and training in AI technologies; and (e) language constraints, as some AI tools are not well-suited for the Greek language due to limitations in natural language processing (NLP) models.
PJ6: We have begun using AI on a trial basis, but we currently lack a specific policy and framework within the organization regarding its implementation and guidelines. I believe this is crucial, especially for a journalistic organization. Just as we have policies for various other aspects, there should be provisions in place for the responsible use of AI.
PA6: I think there is [skepticism] on several fronts. One concern has to do with reliability—specifically, how much we can depend on it [AI]. This makes sense because even the paid version of ChatGPT includes a warning that ‘it may be lying’. I mean, it’s still a very black box, it’s very unclear how it works.
PJ9: In Greece, the limited use of AI tools is partly due to the lack of linguistic resources.
PA2: Most of ChatGPT’s training data is in English, with around 7–10% allocated to other languages, meaning English makes up around 93% of the total data, if I’m not mistaken.
In summary, and in accordance with previous research [11], which highlights the slow adoption of new technologies due to economic constraints and limited investment in innovation, the limited use of AI in Greek journalism can be attributed to cultural, economic, and structural barriers, as well as a lack of clear strategies from media owners and insufficient training for journalists. Additionally, as the research findings suggest, language limitations and skepticism about AI’s reliability further hinder its widespread adoption.

4.3. The Transformative Role of AI in Journalism

Regarding the changes participants expect AI to bring to journalism, the following findings emerge. Among the positive changes, as shown in Figure 3, most participants highlighted (a) the facilitation or automation of certain tasks, such as writing financial reports or sports results, which tend to have a more predictable structure; (b) saving time, which they believe can be redirected to more creative tasks; and (c) the ability to analyze large datasets more quickly and comprehensively, noting that AI can significantly reduce the time needed to process multi-page documents, thus minimizing the risk of overlooking important information.
PJ1: I see it [AI] as a tool. That is, we can turn human activity and the human factor into much more meaningful work, simplifying certain things in our lives.
PJ3: On the positive side, I believe AI can save time in many cases. […] And potentially, I think it would be very helpful for those working with data, helping to analyze and process information more quickly.
PJ9: I also know first-hand that major news agencies use artificial intelligence programs to generate much of their journalistic content, especially when it follows a specific structure. For instance, my first job was to write quarterly reports for the financial section of a newspaper.
Conversely, participants highlighted several negative impacts AI may have on journalism, including (a) potential risks to the quality and accuracy of content, with many expressing specific concerns about deepfakes; (b) a shift in the role and structure of journalistic work, with fears of job loss and the displacement of journalists from key aspects of their profession; and (c) the potential erosion of public confidence in the authenticity and credibility of news content generated by machines.
PJ2: I believe that if the public finds out that a journalist is using artificial intelligence, it could harm the trust between the public and journalists. And, on the other hand, maybe the journalist may start relying too heavily on AI, potentially putting less effort into their own work.
PJ3: So, asking ChatGPT to prepare the questions that I would ask an interviewee is not what I consider… I don’t think it has anything to offer me. Even if it does a good job. […] It might save me some time, but it will take something essential away from the process. […] When you’re trying to think about and plan questions to ask an interviewee before an interview, you’re actually working out the topic you have. […] It’s one of the stages you have to go through to get to the final writing. By skipping that step, you miss out on an important part of the overall process, which ultimately makes the path smoother.
PJ7: I wouldn’t want to see AI take on the role of a journalist; I prefer it to serve as a tool that makes the job easier. While I don’t believe AI will completely replace journalists, I think it will evolve to a point where it could potentially do so, for better or worse.
Overall, as the research findings indicate, while AI is expected to bring positive changes to journalism, such as automating repetitive tasks, saving time, and enhancing data analysis, concerns about its impact on content quality, job displacement, and public trust remain significant. Journalists acknowledge AI’s potential to improve workflow efficiency, but they are wary of its negative consequences, including the erosion of the creative elements of journalism. While AI is seen as a valuable tool, many emphasize the need for it to complement, rather than replace, human judgment and expertise in the journalistic process.

4.4. Ethical Dilemmas and Challenges of AI in Journalism in Greece

Participants identified several ethical dilemmas and challenges associated with the use of AI in journalism, as shown in Figure 4, including (a) concerns about transparency and accountability for AI-generated content, particularly regarding who is responsible for the content and possible errors; (b) issues related to the protection of personal data; (c) copyright concerns; (d) challenges related to algorithmic bias; and (e) the boundaries between humans and AI, as evident in the following extracts:
PA7: There is definitely a risk of bias and it concerns me that we may rely too heavily on these tools. […] This technology is owned by organizations, using data that belongs to them, which means we don’t have control over it. While it can work well, it can also be a little bit dangerous. […] For example, when everyone relied on Facebook for information, a private company, we saw issues arise when their algorithms began to hide or over-promote certain articles. […] This highlights the risks of allowing a private platform to control the flow of information in society.
PJ2: For me this is the most crucial part: if you rely on AI to write or design your article, you risk falling into the trap of misinformation, which can also extend to visual content.
PJ9: I see that now even on the most basic things, that Paris is the capital of France, let’s say, there’s a checker, in the very big foreign media, they ask for human documentation. That is, every report that is written and every investigation that is written, it is asked by the checkers, by the editors and so on, that there is a human being who says that my source is this, that’s where you will find it or whatever.
PA11: Here, I would say that this is precisely why artificial intelligence presents an opportunity to reconsider what we consider human, what we deem important, and what we regard as a product of intellectual and creative endeavor, among other things.
In conclusion, as Greek journalists and academics emphasize, the use of AI in journalism presents several ethical dilemmas and challenges, including concerns over transparency, accountability for AI-generated content, data privacy, algorithmic bias, and copyright issues. Participants are particularly concerned about over-reliance on AI, the potential for misinformation, and the blurring of boundaries between humans and machines. Additionally, AI tools are seen as having the power to shape information flows in society, with risks similar to those posed by algorithm-driven platforms like social media. Ultimately, these challenges demand careful attention to ensure that the integration of AI in journalism maintains public trust, accountability, and journalistic integrity, all key pillars of journalism ethics theory.

4.5. The Use of AI-Generated Images in Journalism: Ethical Challenges and Considerations

The use of AI-generated images in journalism raises significant ethical challenges, with participants expressing skepticism about their inclusion in journalistic content. Their concerns primarily stem from the fact that these images do not depict reality but instead create an illusion of it. On the positive side, some argue that AI can approximate inaccessible spaces, providing readers with visual representations that might otherwise be unavailable.
PJ7: Since access was denied, the fact that there could be an approximation of what the space was like can be beneficial, as the image, for better or worse, still offers something valuable to the reader.
PJ8: The advantage is that, since there was no access to the space, the media was able to present, in a sense, the closest representation of what was happening there. That’s how I think about it.
PJ10: Since it’s stated that this is an AI construct, I see that as a positive thing. It effectively captures a narrative and can engage the reader’s attention more deeply. Regarding photography, we often find that many topics struggle to gain attention without accompanying images. Many times, the lack of a photo can lead to important topics being overlooked. In this context, AI plays a crucial role by facilitating the creation of content and gathering testimonials. I see it as a sort of ’Trojan horse’ for developing new narratives. The key question is whether it’s clear to the audience that this is driven by AI.
PA1: Obviously, there are advantages, in the sense that since you can’t have access, it’s good to somehow be able to represent something more vividly, because the use of audio-visual evidence in general is more vivid than text.
However, the concerns and disadvantages raised are significant. Many participants expressed that reliance on AI-generated images can lead to misinformation. Others highlighted that these images do not accurately represent reality and may confuse readers, particularly if they overlook the captions. Some suggested that sketches would be more suitable than AI-generated images, as the latter can mislead the audience.
PJ1: Now, we have an issue here. This is not a representation of reality. It’s AI-generated, meaning the algorithm used a testimonial provided by someone and formatted it into an image. I’m not sure if this approach is ethical. As an editorial director, I personally wouldn’t publish these images. I would include the testimonial, of course, but I couldn’t justify using those images, as they don’t represent the truth. They are fabricated materials generated by an algorithm, simply producing a visual outcome. While a picture is worth a thousand words, when the text contains a powerful and significant testimonial, an AI-generated image doesn’t add value. Instead, it may lead the reader to think suspiciously.
PJ3: Okay, I think I don’t agree with that use of the medium. I mean, since there are no photographs and it’s based on a narrative about a past event without additional data, the best approach would be to use a sketch. This would make it clear that someone is attempting to convey an account in a different way. Relying on AI-generated images can be confusing, especially if people don’t read the captions or if the images are shared online without context. There’s no point in creating something that appears realistic when it’s not, because it doesn’t represent reality.
PJ5: Artistically it could stand, journalistically I don’t believe it holds up. I mean, first and foremost, it’s an image that doesn’t represent reality. Regardless of how sophisticated AI may be—whether it creates a perfect likeness or not—I think it’s better to use a sketch to depict reality than to present a photo that merely looks real.
PJ6: Does it provide any real information? No. Journalism should aim to present facts related to reality, not serve as a form of activism.
At the same time, while AI-generated images can offer visual context, many participants express concern that they may detract from the core values of photojournalism, potentially undermining the hard work of journalists reporting from challenging environments. Academics, in contrast, seem more optimistic.
PJ7: There will certainly be, I imagine, the debate of photojournalists, where they have been in very difficult areas, in places of great difficulty, and so having the AI automatically create an image from an inaccessible place kind of… degrades their work. Because it’s also a job that’s always done in very difficult conditions.
PJ5: Well, it’s [AI] finishing it [photojournalism]. Sorry. But it’s totally over. And on an economic scale. When you can make this with AI in five minutes. It totally kills the photo.
PA2: I think there will always be a need to see what truly happened—a photograph that genuinely shows us what took place at a location or how an incident unfolded. I don’t believe this can ever be fully replaced by how an AI model might ‘imagine’ the final outcome based on our descriptions.
In summary, as participants noted, while AI-generated images offer certain benefits, such as providing visual representations of inaccessible spaces, they raise significant ethical concerns in journalism. These concerns primarily revolve around the risk of misinformation and the distortion of reality, as AI images do not accurately represent actual events. Many participants argue that these images can mislead audiences, particularly when captions are overlooked, and that alternative visual representations like sketches may be more appropriate. Moreover, the rise of AI-generated images threatens the core values of photojournalism and undermines the work of journalists reporting from challenging environments. These ethical challenges highlight the importance of transparency and maintaining journalistic integrity in the use of AI in visual content.

The Urgent Need for a Regulatory Framework for AI in Journalism

Overall, several participants asserted the necessity of transparency in using AI-generated images. They suggested measures such as clearly communicating the use of AI tools, applying watermarks to identify AI-generated images, and providing detailed prompt descriptions to enhance understanding and maintain ethical standards. Additionally, there is a consensus on the urgent need for a regulatory framework to guide journalists in the responsible use of AI in their work, ensuring that ethical considerations are prioritized.
PJ2: First and foremost, it’s essential to clearly indicate that the image is generated by artificial intelligence.
PJ8: There should possibly be a watermark or some indication on the image that it was created by AI.
PA3: In my personal opinion, it would be beneficial to also communicate the prompt—the exact words and their specific order—that were fed into the AI application to generate the given result.
PA4: I believe it would be very helpful, if not essential, to ensure transparency at all stages of the process—starting from who owns the databases, which is crucial, to identifying the end user who posts comments. This transparency could serve as an effective solution.
PA5: I believe there should be a regulatory framework along with established best practices. This is a challenge for our industry as a whole: we need to define and create a framework that addresses specific ethical issues and regulates them effectively. So, it’s a gap that we in the industry must take action to fill; we cannot afford to ignore it.
In conclusion, as the research findings indicate, the establishment of a regulatory framework for the use of AI in journalism is urgently needed. Participants emphasized the importance of transparency, recommending measures such as the clear identification of AI-generated images, the use of watermarks, and the provision of detailed prompts to enhance audience understanding. Creating such a framework could guide journalists in using AI responsibly. It is clear that the industry must take proactive steps to regulate AI usage to maintain trust and integrity in journalism.

5. Discussion

This study investigated the perceptions and experiences of journalists and academics in Greece concerning the integration of AI into journalism, with a particular emphasis on the ethical challenges it presents. Grounded in a journalism ethics theoretical and conceptual framework, the research utilized 28 semi-structured interviews and thematic analysis to identify key themes that shed light on the evolving role of AI in Greek journalism and the complexities it introduces to the profession. By focusing on Greece’s unique context, characterized by limited resources and language constraints, the study underscores both the opportunities and challenges inherent in less advantaged media landscapes.
The research results reveal a shared view among participants that, while AI is increasingly being integrated into journalism, its application remains largely experimental. Most journalists and academics view AI as a tool that can assist professionals in certain aspects of their work, particularly in routine tasks such as transcription, translation, and data analysis. Despite this optimism, they remain cautious about AI’s potential to erode core journalistic values, such as transparency and accuracy, and to amplify bias, all fundamental pillars of the journalism ethics framework [32].
These results align with findings by Diakopoulos [5] and Cools and Diakopoulos [28], who conducted semi-structured interviews with journalists in the Netherlands and Denmark. While these journalists acknowledge the benefits of such tools, including enhanced efficiency and improved data management, they also express concerns about the potential harm to journalism’s accuracy and credibility, as well as ethical issues like algorithmic bias. De-Lima-Santos and Ceron [37] share this concern, stressing that although AI has enormous promise for efficiency and flexibility, it should not come at the price of the fundamental principles and ethics that guide the journalism profession.
The use of AI-generated images in journalism sparked particular debate among participants. While some acknowledged that these images might provide valuable information about inaccessible locations, the prevailing sentiment was that of skepticism. Participants expressed concern that AI-generated visuals do not accurately depict reality and could mislead the audience. Many suggested that sketches might be more appropriate in conveying such narratives, emphasizing the need for accuracy and integrity in visual journalism. Participants highlighted the importance of ensuring that audiences are aware when content is AI-generated to avoid misleading representations. Many advocated for the use of clear indicators, such as watermarks, to inform viewers about the nature of the images presented.
Overall, participants stressed the need for a regulatory framework and for established guidelines that govern the ethical use of AI in journalism, advocating for collective action within the industry to address these issues. The lack of regulation could hinder the responsible use of AI technologies and exacerbate existing ethical problems.
Indeed, journalistic ethical guidelines in Greece must be updated by journalists’ unions to establish clear standards, which would enhance the understanding and responsible use of AI in newsrooms. As highlighted by other scholars [28,38], AI’s presence often goes unnoticed by journalists. Establishing clear guidelines can serve as a starting point for fostering a more constructive discussion. AI literacy training is essential for all professionals within the news ecosystem, from editors and reporters to content creators. Additionally, media organizations should be consulted on how to strategically integrate AI into their planning.
The broader issue, however, has to do with how the relationship between humans and machines will be defined and what role journalists will play in this new landscape. What standards have we set as humanity, based on our shared value system? One participant noted the following:
PA15: Technology and the use of AI and generative AI can, of course, provide significant assistance and enhance human capabilities in many areas, including journalism, by freeing up considerable time for journalists to focus on other tasks. However, the question of what role humans and professionals should play in the field of journalism in the age of AI is, in my view, the most critical issue. […] this raises the question of where we are headed and how we should define the role of humans in this complex ecosystem. This represents the greatest ethical dilemma today, not only in journalism but also in other sectors. We have not yet resolved it.
Updating ethical and deontological standards to reflect current technological developments is crucial—not only for journalism but for all sectors. These standards, however, must reflect our collective values and visions for the future. PA15 insightfully stated the following:
PA15: In this context, the concept of AI literacy, and how we understand the implications of creating technology that meets the standards we as humanity have set based on our value system—beyond the legal framework—is essential. It reflects what we stand for and how we envision the future for ourselves and our children in this new reality. This understanding should, to a large extent, guide and strengthen the ethical and deontological rules we follow as institutions, organizations, journalists, and beyond.
As Helberger et al. [26] point out, technology does not dominate us; instead, it is our responsibility as a society—consisting of professionals, academics, and developers—to shape technologies in a way that reflects the society we want to build. To make this possible, the development and implementation of digital solutions must be driven by a clear vision of the values and fundamental freedoms we wish to uphold. This vision is just as important when it comes to the use of artificial intelligence (AI) in journalism. One academic noted the following:
PA2: I believe that, as has often been noted in the literature, the term ’meta-journalist’ has emerged, which, to some extent, reflects a new role in the field. We should aim to foster a symbiotic relationship between these technologies and traditional journalism practices. First, because we have no alternative—these technologies are here to stay—and second, because such a relationship can lead to better outcomes. Over time, journalists will likely need to rethink the core skills required for their profession. This doesn’t mean they will function purely as curators; rather, they can harness artificial intelligence and other technologies to free themselves from some of the more mundane tasks of journalism.
The question of how much of ourselves—our creativity, critical thinking, and decision-making—we are willing to delegate to AI, especially in the pursuit of saving time, is a profound one. As we increasingly rely on AI tools to streamline tasks and enhance efficiency, there is a growing risk of outsourcing elements of our thinking processes that have traditionally been deeply human. From content creation to decision-making and problem-solving, these tools can create a dependency that may gradually erode our ability to think critically, solve problems, and innovate independently.
Hence, the real challenge lies in finding a balance: as AI becomes more integrated into our daily lives, we need to reflect on what it means to remain fully human in an increasingly automated world, ensuring that technology serves as an enhancement, not a replacement, for the deeper, more nuanced aspects of our cognitive and creative selves. In such a regulated, conscious, and symbiotic framework, perhaps HAL 9000 could open the door for Dave, and Dave would not need to shut his iconic assistant down.

6. Conclusions

In this study, we explored the extent to which AI tools are used by journalists in Greece, examining the benefits, potential risks, and ethical challenges they present. While recognizing AI’s groundbreaking potential, the findings reveal that its application in journalism remains largely experimental, particularly in countries like Greece, where economic and linguistic constraints, limited resources, and inadequate training infrastructure present significant barriers. By examining this smaller media ecosystem, this study enriches the global discourse on AI in journalism, offering valuable insights into the opportunities and challenges faced in resource-limited contexts.
Positioning Greek journalism within the broader global AI debate highlights the improvisational approach to AI integration compared to that in better-resourced media markets [9]. Nevertheless, it provides valuable insights into the potential risks of adopting AI without adequate preparation or oversight. Participants underscored ethical challenges associated with AI-generated content, particularly the risks of misinformation and bias. The findings also stress the importance of fostering dialogue among stakeholders to address the complexities of AI in media, ensuring that journalistic integrity, accuracy, and ethical standards remain at the core of integrating emerging technologies.
This underscores the pressing need for localized regulatory frameworks that not only align with international best practices but also address region-specific challenges. To bridge the gaps identified in this study, journalists’ unions, policymakers, academics, and media organizations should prioritize actionable measures, such as implementing AI literacy programs and establishing ethical guidelines.
While this study offers valuable insights into journalists’ perceptions and applications of generative AI, it is essential to acknowledge its limitations. First, the reliance on a limited number of interviews means that the research provides only a partial view of the broader journalistic landscape in relation to generative AI tools. The timing of the interviews, conducted between May and September 2024, is also a crucial factor, as it captures a specific period of generative AI tool usage. Future research could build on this study by conducting comparative analyses across similarly constrained media systems, allowing for a more nuanced understanding of AI’s impact in varied contexts. Given the small sample size and qualitative approach, broader mixed-method studies could also provide more comprehensive insights.
In this transformative era, balancing technological innovation with ethical responsibility is a real challenge for journalism worldwide. While algorithms may increasingly drive content creation and decision-making, the human touch—embodied in the insight, judgment, and critical thinking of journalists—must remain at the core of the profession.

Author Contributions

Conceptualization, P.K. and C.A.; methodology, P.K. and C.A.; validation, P.K. and C.A.; formal analysis, P.K. and C.A.; investigation, P.K. and C.A.; resources, P.K. and C.A.; data curation, P.K. and C.A.; writing—original draft preparation, P.K.; writing—review and editing, P.K.; visualization, C.A.; supervision, P.K.; project administration, P.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Ethics Committee of Aristotle University of Thessaloniki (Protocol Number: 211411//2024).

Informed Consent Statement

Written informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data presented in this study are available on request from the corresponding author (the data are not publicly available due to privacy or ethical restrictions).

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. List of Participants

| Number | Code | Gender | Media/Platform | Date |
|---|---|---|---|---|
| 1 | PJ-1 | Man | Online News Outlet | 31 May 2024 |
| 2 | PJ-2 | Woman | Freelance | 2 June 2024 |
| 3 | PJ-3 | Man | Newspaper—Television—Online News Outlet | 3 June 2024 |
| 4 | PJ-4 | Man | Freelance—Online News Outlet | 4 June 2024 |
| 5 | PJ-5 | Man | Freelance | 5 June 2024 |
| 6 | PJ-6 | Woman | Online News Outlet | 6 June 2024 |
| 7 | PJ-7 | Woman | Online News Outlet | 7 June 2024 |
| 8 | PJ-8 | Woman | Online News Outlet | 10 June 2024 |
| 9 | PJ-9 | Woman | Freelance | 11 June 2024 |
| 10 | PJ-10 | Woman | Newspaper—Television—Online News Outlet | 12 June 2024 |
| 11 | PJ-11 | Woman | Television | 12 June 2024 |
| 12 | PJ-12 | Man | Online News Outlet | 13 June 2024 |
| 13 | PA-1 | Woman | University of Western Macedonia | 14 June 2024 |
| 14 | PA-2 | Man | Aristotle University of Thessaloniki | 17 June 2024 |
| 15 | PA-3 | Woman | Ionian University | 18 June 2024 |
| 16 | PA-4 | Woman | Aristotle University of Thessaloniki | 19 June 2024 |
| 17 | PA-5 | Man | Ionian University | 20 June 2024 |
| 18 | PA-6 | Man | Aristotle University of Thessaloniki | 21 June 2024 |
| 19 | PA-7 | Man | University of Western Macedonia | 24 June 2024 |
| 20 | PA-8 | Man | Aristotle University of Thessaloniki | 25 June 2024 |
| 21 | PA-9 | Man | International Hellenic University | 13 September 2024 |
| 22 | PA-10 | Man | Aristotle University of Thessaloniki | 16 September 2024 |
| 23 | PA-11 | Man | National Centre for Scientific Research Demokritos | 18 September 2024 |
| 24 | PA-12 | Man | Aristotle University of Thessaloniki | 24 September 2024 |
| 25 | PA-13 | Man | Aristotle University of Thessaloniki | 27 September 2024 |
| 26 | PA-14 | Woman | National Centre for Scientific Research Demokritos | 30 September 2024 |
| 27 | PA-15 | Woman | National Centre for Scientific Research Demokritos | 30 September 2024 |
| 28 | PA-16 | Man | Aristotle University of Thessaloniki | 30 September 2024 |

Appendix B. Interview Guide Comprised of Six Axes

The interview guide consisted of six phases.

Phase 1: Journalists’/academics’ careers
- Can you tell me about your career trajectory as a journalist/researcher, from the beginning to your current position?
- Can you describe your current daily work routine in the newsroom or in the field/your primary research interests?

Phase 2: Journalists’ and academics’ views about significant changes in journalism over the last decade and the effect of technologies on it
- What are the biggest changes in Greek journalism over the past decade, according to your experience?

Phase 3: Journalists’ and academics’ ideas about the implementation of AI in Greek journalism
- Do you use any AI application(s) in your journalistic work? If yes, which one(s)? (journalists)
- What kinds of tasks have you used AI for? (e.g., newsgathering, news production, or news distribution) (journalists)
- From your research, how would you assess the extent to which AI has penetrated journalism? (academics)
- To what extent do you think AI has been integrated into Greek journalism? (journalists)
- If AI has been adopted to a limited extent or not at all, what do you think are the reasons for its underutilization? (both)
- Do you believe journalists generally have the necessary knowledge and training to effectively use AI in their work? (both)

Phase 4: Exploration of participants’ perceptions of the changes that AI brings about in the journalistic environment
- In general, what changes do you think AI is bringing about in journalism?
▪ What kinds of positive changes?
▪ What kinds of negative changes?

Phase 5: Ethical dilemmas and challenges in the application of AI in journalism
- What kinds of ethical issues are raised by the use of AI in journalism?

Phase 6: Exploration of journalists’ and academics’ perceptions of the use of AI-generated news content and images
- We would now like to show you two images created with the assistance of AI. These images are based on the testimonies of 32 refugees and migrants detained in offshore facilities on Nauru and Manus Island in Australia. They visually depict the inhumane living conditions within these facilities, to which journalists and photojournalists were denied access. Drawing from the accounts of women and men held in these offshore centers, and with the support of AI, refugee and migrant advocates developed these images. Subsequently, these images were used by organizations seeking to bring attention to this issue.
- What are the potential advantages of using such AI-generated images in news content?
- What are the possible disadvantages of using such images, in your view?
- How do you think journalists should ethically and deontologically approach the use of such images?

References

  1. Perc, M.; Ozer, M.; Hojnik, J. Social and juristic challenges of artificial intelligence. Palgrave Commun. 2019, 5. [Google Scholar] [CrossRef]
  2. Newman, N. Journalism, Media and Technology: Trends and Predictions for 2020. Reuters Institute for the Study of Journalism & Oxford University, 9 January 2020. Available online: https://reutersinstitute.politics.ox.ac.uk/journalism-media-and-technology-trends-and-predictions-2020 (accessed on 14 November 2024).
  3. Gomez-Diago, G. Perspectives to address artificial intelligence in journalism teaching. A review of research and teaching experiences. Rev. Lat. Comun. Soc. 2022, 80, 29–46. [Google Scholar] [CrossRef]
  4. Noain-Sánchez, A. Addressing the Impact of Artificial Intelligence on Journalism: The perception of experts, journalists and academics. Commun. Soc. 2022, 35, 105–121. [Google Scholar] [CrossRef]
  5. Diakopoulos, N. Automating the News: How Algorithms are Rewriting the Media; Harvard University Press: Cambridge, MA, USA; London, UK, 2019. [Google Scholar]
  6. Bentivegna, S.; Marchetti, R. Journalists at a crossroads: Are traditional norms and practices challenged by Twitter? Journalism 2018, 19, 270–290. [Google Scholar] [CrossRef]
  7. Broussard, M.; Diakopoulos, N.; Guzman, A.; Abebe, R.; Dupagne, M.; Chuan, C.-H. Artificial Intelligence and Journalism. J. Mass Commun. Q. 2019, 96, 673–695. [Google Scholar] [CrossRef]
  8. Ioscote, F.; Gonçalves, A.; Quadros, C. Artificial Intelligence in Journalism: A Ten-Year Retrospective of Scientific Articles (2014–2023). J. Media 2024, 5, 873–891. [Google Scholar] [CrossRef]
  9. Parratt-Fernández, S.; Mayoral-Sánchez, J.; Mera-Fernández, M. The application of artificial intelligence to journalism: An analysis of academic production. Prof. Inf. 2021, 30, e300317. [Google Scholar] [CrossRef]
  10. Papathanassopoulos, S. The Greek Media at the Intersection of the Financial Crisis and the Digital Disruption. In The Emerald Handbook of Digital Media in Greece (Digital Activism and Society: Politics, Economy and Culture in Network Communication); Veneti, A., Karatzogianni, A., Eds.; Emerald Publishing Limited: Leeds, UK, 2020; pp. 131–142. [Google Scholar] [CrossRef]
  11. Spyridou, L.-P.; Matsiola, M.; Veglis, A.; Kalliris, G.; Dimoulas, C. Journalism in a state of flux: Journalists as agents of technology innovation and emerging news practices. Int. Commun. Gaz. 2013, 75, 76–98. [Google Scholar] [CrossRef]
  12. McLuhan, M. The Gutenberg Galaxy: The Making of Typographic Man; University of Toronto Press: Toronto, ON, Canada, 1962. [Google Scholar]
  13. Pavlik, J. The Impact of Technology on Journalism. J. Stud. 2000, 1, 229–237. [Google Scholar] [CrossRef]
  14. The Changing Newsroom: What is Being Gained and What is Being Lost in America’s Daily Newspapers? Pew Research Center, 21 July 2008. Available online: https://www.pewresearch.org/journalism/2008/07/21/the-changing-newsroom-2/ (accessed on 14 November 2024).
  15. Őrnebring, H. Newsworkers: A Comparative European Perspective; Bloomsbury Publishing Inc.: New York, NY, USA, 2017. [Google Scholar]
  16. Linden, C.-G. Decades of Automation in the Newsroom: Why are there still so many jobs in journalism? Digit. J. 2017, 5, 123–140. [Google Scholar] [CrossRef]
  17. Amponsah, P.N.; Atianashie, A.M. Navigating the New Frontier: A Comprehensive Review of AI in Journalism. Adv. J. Commun. 2024, 12, 1–17. [Google Scholar] [CrossRef]
  18. Beckett, C. New Powers, New Responsibilities: A Global Survey of Journalism and Artificial Intelligence; London School of Economics: London, UK, 2019; Available online: https://blogs.lse.ac.uk/polis/2019/11/18/new-powers-new-responsibilities (accessed on 14 November 2024).
  19. Casswell, D.; Dörr, K. Automated Journalism 2.0: Event-driven narratives: From simple descriptions to real stories. J. Pract. 2018, 12, 477–496. [Google Scholar] [CrossRef]
  20. Kotenidis, E.; Veglis, A. Algorithmic Journalism—Current Applications and Future Perspectives. J. Media 2021, 2, 244–257. [Google Scholar] [CrossRef]
  21. van Dalen, A. Revisiting the Algorithms Behind the Headlines. How Journalists Respond to Professional Competition of Generative AI. J. Pract. 2024, 18, 648–658. [Google Scholar] [CrossRef]
  22. Greenslade, R. AP replaces reporters with automated system to produce company results. The Guardian, 2 July 2014. Available online: https://www.theguardian.com/media/greenslade/2014/jul/02/associated-press-digital-media (accessed on 14 November 2024).
  23. Sun, M.; Hu, W.; Wu, Y. Public Perceptions and Attitudes Towards the Application of Artificial Intelligence in Journalism: From a China-based Survey. J. Pract. 2022, 18, 548–570. [Google Scholar] [CrossRef]
  24. Clerwall, C. Enter the Robot Journalist: Users’ perceptions of automated content. J. Pract. 2014, 8, 519–531. [Google Scholar] [CrossRef]
  25. Meijer, I.C.; Kormelink, T.G. Changing News Use Unchanged News Experiences? Routledge: London, UK, 2020. [Google Scholar]
  26. Helberger, N.; van Drunen, M.; Moeller, J.; Vrijenhoek, S.; Eskens, S. Towards a Normative Perspective on Journalistic AI: Embracing the Messy Reality of Normative Ideals. Digit. J. 2022, 10, 1605–1626. [Google Scholar] [CrossRef]
  27. Pariser, E. The Filter Bubble: What the Internet Is Hiding From You; Penguin Press: New York, NY, USA, 2011. [Google Scholar]
  28. Cools, H.; Diakopoulos, N. Uses of Generative AI in the Newsroom: Mapping Journalists’ Perceptions of Perils and Possibilities. J. Pract. 2024, 1–19. [Google Scholar] [CrossRef]
  29. Ward, S.J. Journalism Ethics. In The Handbook of Journalism Studies; Wahl-Jorgensen, K., Hanitzsch, T., Eds.; Routledge: London, UK, 2019; pp. 307–323. [Google Scholar]
  30. Paik, S. Journalism Ethics for the Algorithmic Era. Digit. J. 2023, 1–27. [Google Scholar] [CrossRef]
  31. Noble, S.U. Algorithms of Oppression: How Search Engines Reinforce Racism; New York University Press: New York, NY, USA, 2018. [Google Scholar]
  32. Deuze, M. What Is Journalism? Professional Identity and Ideology of Journalists Reconsidered. Journalism 2005, 6, 442–464. [Google Scholar] [CrossRef]
  33. Fanta, A. Putting Europe’s Robots on the Map: Automated Journalism in News Agencies; Reuters Institute for the Study of Journalism, University of Oxford: Oxford, UK, 2017. Available online: https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2017-09/Fanta,%20Putting%20Europe%E2%80%99s%20Robots%20on%20the%20Map.pdf (accessed on 8 January 2025).
  34. Sandoval-Martín, M.T.; La-Rosa, L. Big Data as a differentiating sociocultural element of data journalism: The perception of data journalists and experts. Commun. Soc. 2018, 31, 193–208. [Google Scholar] [CrossRef]
  35. Braun, V.; Clarke, V. Using thematic analysis in psychology. Qual. Res. Psychol. 2006, 3, 77–101. [Google Scholar] [CrossRef]
  36. Papadopoulou, L.; Maniou, T.A. “SLAPPed” and censored? Legal threats and challenges to press freedom and investigative reporting. Journalism 2024, 14648849241242181. [Google Scholar] [CrossRef]
  37. de-Lima-Santos, M.F.; Ceron, W. Artificial Intelligence in News Media: Current Perceptions and Future Outlook. J. Media 2022, 3, 13–26. [Google Scholar] [CrossRef]
  38. de Haan, Y.; van den Berg, E.; Goutier, N.; Kruikemeier, S.; Lecheler, S. Invisible Friend or Foe? How Journalists Use and Perceive Algorithmic-Driven Tools in Their Research Process. Digit. J. 2022, 10, 1775–1793. [Google Scholar] [CrossRef]
Figure 1. Current uses of AI in Greek journalism, according to participants.
Figure 2. Barriers to further AI use, according to participants.
Figure 3. Changes AI may bring to journalism, according to participants.
Figure 4. AI and ethics.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Kalfeli, P.; Angeli, C. The Intersection of AI, Ethics, and Journalism: Greek Journalists’ and Academics’ Perspectives. Societies 2025, 15, 22. https://doi.org/10.3390/soc15020022

