Beyond Interface: AI News Anchors as a Form of Brand-New Media Presentation—A Chinese Case based on the ITO Model
Abstract
AI (artificial intelligence) news anchors have become a unique news presentation interface and a brand-new media form between content producers and audiences. Based on the ITO news production stages and ethics model, this study analyzes the technical advantages, potential limitations, and future trends of AI news anchors, which are likely to become an original intelligent medium and to promote the development of the information society.
Keywords: AI, news anchor, ITO model, interface, news production, news ethics
This paper was supported by the National Social Science Fund of China (Grant No. 17CXW039) and the Fundamental Research Funds for the Central Universities (2018NTSS63).
Introduction
Media is somewhat equivalent to real life, in the sense that individuals may perceive digital prototypes as real people and extend a similar perception or trust to them, as long as those prototypes are sufficiently intelligent and humanoid (Reeves & Nass, 1996; Epley et al., 2008). News forms and ethics have gradually become subject to ever-greater technological impact (Parry, 2011). One of the areas of technological progress with the most profound effect is artificial intelligence (AI), which refers to the study of how to make computers do intelligent work that only people could do in the past (Brown, 1984; Lemley et al., 2017; Smith & Eckroth, 2017). Currently, the primary question we encounter is this: what will happen when artificial intelligence technology begins to invade news production in the public sphere?
News anchors have long held a unique role in the journalism industry; their faces, gestures, voices, and tones, deeply rooted in the hearts of their audiences, are the most distinctive and even irreplaceable elements of a particular live show. However, it seems that there is nothing that cannot be automated (Van Dalen, 2012), and news anchors are no exception. The development of technology has allowed all aspects of an anchor's image to be precisely duplicated. For instance, on 7 November 2018, Xinhua News Agency and Sogou Company released the world's first synthetic news anchor, named AI Synthetic Anchor (Baraniuk, 2018), which uses artificial intelligence to extract features (voice, lip movements, and facial expressions) from real anchors and deep learning to automatically generate videos that incorporate these features in a natural, consistent manner.
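To make this two-stage pipeline more concrete, the sketch below separates the learning of an anchor's audiovisual features from the rendering of a new broadcast from a text script. It is a minimal conceptual sketch in Python; every class, function, and path name is a hypothetical stand-in, since the actual Xinhua/Sogou implementation has not been published.

# Conceptual sketch only: stand-in names, not the actual Xinhua/Sogou system.
from dataclasses import dataclass
from typing import List

@dataclass
class AnchorFeatures:
    # Features learned from footage of a real anchor (placeholder vectors).
    voice_profile: List[float]      # e.g. a speaker embedding for voice cloning
    lip_motion_params: List[float]  # mapping from phonemes to lip shapes
    expression_params: List[float]  # parameters for facial expressions

def extract_features(footage_path: str) -> AnchorFeatures:
    # Stage 1: learn the anchor's voice, lip movements, and expressions
    # from recorded broadcasts (stubbed here with placeholder values).
    return AnchorFeatures(voice_profile=[0.0], lip_motion_params=[0.0], expression_params=[0.0])

def synthesize_broadcast(script: str, features: AnchorFeatures) -> List[str]:
    # Stage 2: given a news script, synthesize speech in the cloned voice and
    # render video frames whose lip movements and expressions match the audio.
    audio = f"synthesized speech for: {script}"
    frames = [f"frame lip-synced to '{word}'" for word in script.split()]
    return [audio] + frames

if __name__ == "__main__":
    anchor = extract_features("archive/real_anchor_footage/")  # hypothetical path
    for item in synthesize_broadcast("Xinhua unveils its first AI news anchor.", anchor):
        print(item)

In a production system each stub would be a trained model (text-to-speech, lip-sync, and face rendering); the point illustrated here is only the division of labor between feature extraction and video generation that the paragraph above describes.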
Even before Xinhua News Agency made its impressive attempt, the prospect of synthetic anchors had already generated attention in the global media industry. For example, the UK Press Association launched the first 2D composite anchor, named Ananova (BBC, 2000); later, the United States and South Korea jointly launched a CG anchor image. From 2D to 3D, from stiff movements to rich expressions, and from unconvincingly reproduced images to full-fledged, expressively rendered figures, this technological innovation not only represented a breakthrough in the field of global AI synthesis but also pioneered the integration of real-time audio and video in the communication field.
AI News Anchors: Trends, Advantages, and Drawbacks
The visualized AI news anchor presents several apparent advantages in practice. First, it conforms to audiences' expectations in the era of livestreaming and short video. Second, it enables producers to escape the temporal and spatial restrictions of traditional news production, providing a more sustainable and immersive experience for the viewer. Moreover, AI is comparatively error-controllable, as it is a technical product entirely programmed by computer code and instructions (Zhao, 2019).
However, the limitations of the synthetic AI news anchor are also quite apparent. From Ananova to the Xinhua News Agency AI news anchor, although the quality of their news output has improved, the main types of news they can present are still limited to sports, finance, weather forecasts, and short news items; even in these formats, their performance falls far short of the standards one would expect of "smart anchors." One illustrative case was an AI anchor's notable mistake of reading "Jack Ma," referring to Alibaba founder Ma Yun, as "Jack Massachusetts" (Li, 2018). Even where strict standards of interpretation are set, such errors may still occur. In spite of its limitations, current AI news anchor technology is virtually equivalent to a new interface, or mouthpiece, for news organizations.
Hence, it is reasonable to speculate on the future trends of AI news anchors. First, visual and audio synchronization will likely be enhanced significantly. To simulate real people as accurately as possible, AI news anchors would need to improve in areas such as micro-emotions and contextual expressions. The second goal is to improve AI news interactivity. Chen Wei, who is in charge of the AI anchor technology at Xinhua News Agency, asserts that with further development of modeling and semantic recognition technology, AI news anchors will evolve to become news communicators with interactive informational and emotional capability, thus providing audiences with a more immersive experience.
Crisis and Challenges: New Technologies, Old Problems
Although AI technology is reshaping the journalism industry, the ultimate ethical issue is determining how to follow the principles of journalistic professionalism and maintain the objectivity, authenticity, and accuracy of the news. The discussion of AI anchor-related ethics is subject to a particular set of concerns, where cyber-ethics, news ethics, and digital media ethics are interwoven (Tavani, 2011); these issues warrant consideration from the perspectives of news transparency, social accountability, human values, and privacy concerns (Dörr & Hollnbuchner, 2017).
This study discusses the ethical issues affecting the use of AI news anchors, referring to the ITO framework (presented in Figure 1) as its basis. The ITO framework divides AI news production into three processes: input, throughput, and output. On this basis, the ethics of AI news anchors are discussed from organizational, professional, and social perspectives (Dörr, 2016; Latzer et al., 2016; Reiter & Dale, 2000).
Figure 1: ITO (Input-Throughput-Output) model for the analysis of AI anchors
Input: human will, news sources, and data legitimacy
Journalists possess a certain measure of autonomy and authority in presenting news and depicting the outside world (Schudson, 1989). In traditional media, this will is embodied in topic selection and narrative angle; in the production of AI anchor news, it is transformed into rules for content input and processing operations.
Nevertheless, the rules governing the machine are still determined by the humans behind it; even if these rules represent a consensus or "general opinion," they may still introduce bias into the news.
A lack of investigation into the authenticity of news sources may also allow synthetic AI anchors to disseminate fake news. Veracity is one of the most fundamental ethical rules of news presentation; however, in AI news presentation and production, false information can be inadvertently incorporated into reports through the automatic gathering of misinformation and disinformation, especially when there is insufficient human capacity to support appropriate fact-checking. Furthermore, only when the process of data collection conforms to laws and regulations and does not violate the privacy rights of individuals or groups can news reporting be considered legitimate. Human reporters weigh these issues and conform to basic moral rules and regulations, whereas the expected behavior of AI anchors must be codified into operable regulations.
Throughput: relative objectivity, reliability, and responsibility
Objectivity has been re-examined from the perspective of fairness and justice (Deuze, 2005), described as journalists' idealistic paradox (Maras, 2013; Deuze, 2005), and yet remains the most essential norm of journalistic professionalism. The news presentations, and even the values, of AI news anchors are programmed and stylized; more importantly, they lack journalists' internalized understanding of norms and well-trained professional skills, which can only produce a relative or "crafted" objectivity (Gillespie, 2014).
Moreover, even the faces of news anchors imply power and reliability. If improperly manipulated, they can displace the control of public opinion. A case in point was a video clip that purported to be a public service announcement by former US President Obama, apparently making offensive comments. The video, which drew 6.2 million views on BuzzFeed, was later shown to be a "deepfake" created in merely 56 hours using the AI software FakeApp. This indicates that media reliability also involves the issue of liability.
Media accountability involves the norms to which journalists and anchors are expected to adhere, and the standards they are responsible for upholding (Glasser, 1989). In the traditional news industry, each stage of production is subject to a strict auditing system covering fact verification, writing style, balance of reporting, and so on; editors, journalists, and anchors are each accountable for their own responsibilities. The performance of the news anchor is in fact a comprehensive reflection of all the processes behind it. With AI anchors, however, a considerable part of the relevant staff are technicians rather than professional journalists, so technical work becomes interwoven with, and must be balanced against, news media credibility, while the public sees only the seemingly familiar faces of AI anchors on the interface.
Output: empathy, attachment, and portrait abuse
The anchor, as the presenter of the news, is a special subject in the news production process and has a great influence on audiences' perceptions (Schudson, 1989). The AI synthetic anchor is a highly anthropomorphic technology that can fully model tone, expression, posture, voice pitch and speed, and other elements, and previous studies have found that the degree of anthropomorphism does affect the construction of trust in human-computer interaction (Davidson & Laroche, 2016; Waytz, Heafner & Epley, 2014). What AI news anchors still lack, however, is real empathy, which leaves them far from meeting audiences' humanistic demands.
Another aspect worth pondering is that deep learning and algorithmic systems have allowed artificial intelligence to display the characteristics of a quasi-personality. In particular, AI researchers may develop autonomous agents, or "strong artificial intelligence," in the future. In that case, AI agents with quasi-personalities could easily encourage people to become attached to them. For example, when robots are used to take care of children and the elderly, even simple interactions such as cuddling and feeding may, in the long term, lead human beings to invest their feelings in, and grow dependent on, machines. Such is the case with AI news anchors. If people project attachments onto AI anchors the way fans relate to their human idols, the relationship between them becomes something more complex than a mere human-machine relationship, developing instead into a potentially risky infatuation.
If image synthesis technology were abused, and the AI anchor's image were matched with inappropriate comments and emotionally contagious expressions, as discussed in the throughput process, the portrait rights and privacy rights of celebrities, famous anchors, and the public would become vulnerable to violation and exploitation.
Facing change: clarifying principles and operations
The emergence of AI anchors as a brand-new medium in the news production process has had a major impact on the guidelines originally followed by the media and has raised a number of ethical concerns across the three processes of input, throughput, and output. As far as current technological development is concerned, synthetic AI anchors are still at the stage of "weak artificial intelligence," and they apparently still have a long way to go to achieve self-awareness and social value judgment. Nevertheless, the consideration of ethical issues should be advanced further, on the premise that technical logic serves the development of the media industry, in order to clarify the anchor's instrumental value.
The media bear social accountability; whether relying on human anchors or on AI technology, the news industry should conform to norms such as accuracy, objectivity, and anthropocentric values. On this basis, the news industry and technical practitioners should strive for balance between the right to know and the right to privacy, between public service and commercial achievement, and between technical specifications and journalistic professionalism, so as to further minimize the impact of the ethical issues raised by AI news anchors.
Moreover, the news production process of AI synthetic anchors should be governed by precisely regulated operations based on the principles of news professionalism (Merrill, 2011). These standards should apply to data input, throughput, and output processing, with a transparent accountability mechanism and a detailed operating process. The clarification of operations is the only way to effectively control, oversee, and guide technology. The regulatory system should therefore specify in detail how to bring journalistic professionalism into operation, how to embed moral principles into code, and how to narrow the gap between value rationality and instrumental rationality.
Conclusion
In conclusion, when human beings create and use the "technium," we participate in something bigger than ourselves (Kelly, 2012; Baber, 2010), which expands our power to create and accelerates our evolution into the future; with each new possibility that emerges, we increase the possibilities for everything else. Although the emergence of AI news anchors has the potential to augment the efficiency of information services in the journalism industry, we also need to understand rationally the potential problems accompanying artificial intelligence technology, which will allow us to meet the ensuing ethical challenges in the future course of human progress. Artificial intelligence is simultaneously changing human-machine interaction and mass communication, and the news industry needs to embrace the forging power of technology while maintaining human dignity and intellect, better realize human-computer coupling and symbiosis, and promote the development of the information society.
Bibliography
Baber, Z. (2010). Society: the rise of the “technium”. Nature, 468(7322), 372-373.
Baraniuk, C. (2018, November 8). China's Xinhua agency unveils AI news presenter. Retrieved from https://www.bbc.com/news/technology-46136504.
BBC (2000, April 19) “Ananova” makes her debut. Retrieved from http://news.bbc.co.uk/2/hi/entertainment/718327.stm.
Brown, R. H. (1984). Artificial Intelligence, an MIT Perspective. P. H. Winston (Ed.), 4. Cambridge: MIT Press.
Davidson, A., & Laroche, M. (2016). Connecting the dots: how personal need for structure produces false consumer pattern perceptions. Marketing Letters, 27(2), 337-350.
Deuze, M. (2005). What is journalism? Professional identity and ideology of journalists reconsidered. Journalism, 6(4), 442-464.
Dörr, K. N. (2016). Mapping the field of Algorithmic Journalism. Digital Journalism, 4 (6), 700–722.
Dörr, K. N., & Hollnbuchner, K. (2017). Ethical challenges of algorithmic journalism. Digital journalism, 5(4), 404-419.
Epley, N., Waytz, A., Akalis, S., & Cacioppo, J. T. (2008). When we need a human: motivational determinants of anthropomorphism. Social Cognition, 26(2), 143-155.
Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Ed.), Media Technologies: Essays on Communication, Materiality, and Society, 167-193, Cambridge: MIT Press.
Glasser, T. L. (1989). Three views on accountability. In E. E. Dennis, D. M. Gillmor, & T. L. Glasser (Ed.), Media Freedom and Accountability, 179-193. NY: Praeger.
Kelly, K. (2012). The Technium. Beijing: Publishing House of Electronics Industry.
Li, S. H. (2018, November 10). “AI anchors” may bring revolutionary changes to TV reports beyond the "valley of fear" of simulation technology. Retrieved from https://www.tmtpost.com/3581001.html.
Latzer, M., Hollnbuchner, K., Just, N., & Saurwein, F. (2016). The economics of algorithmic selection on the Internet. In J. Bauer, & M. Latzer (Ed.), Handbook on the Economics of the Internet. UK: Edward Elgar Publishing.
Lemley, J., Bazrafkan, S., & Corcoran, P. (2017). Deep learning for consumer devices and services: pushing the limits for machine learning, artificial intelligence, and computer vision. IEEE Consumer Electronics Magazine, 6(2), 48-56.
Maras, S. (2013). Objectivity in Journalism, 36-41. John Wiley & Sons.
Merrill, J. C. (2011). Theoretical Foundations for Media Ethics. In A. D. Gordon, J. M. Kittross, J. C. C. Merrill, W. Babcock, & M. Dorsher (Ed.), Controversies in Media Ethics (3rd ed.), 3-32. New York: Routledge.
Milosavljević, M., & Vobič, I. (2019). Human still in the loop: editors reconsider the ideals of professional journalism through automation. Digital Journalism, 1-19.
Parry, R. (2011). The Ascent of Media: From Gilgamesh to Google via Gutenberg, 7. Hachette UK: Nicholas Brealey.
Reeves, B., & Nass, C. I. (1996). The media equation: how people treat computers, television, and new media like real people and places, 8. Cambridge: Cambridge University Press.
Reiter, E., & Dale, R. (2000). Building Natural Language Generation Systems, 102-137. Cambridge: Cambridge University Press.
Schudson, M. (1989). The sociology of news production. Media Culture & Society, 11(3), 263-282.
Smith, R. G., & Eckroth, J. (2017). Building AI applications: yesterday, today, and tomorrow. AI Magazine, 38(1), 6-22.
Splichal, S. (2001). Journalism and journalists. In J. Wright (Ed.), International Encyclopedia of the Social & Behavioral Sciences, 857-861. Amsterdam: Elsevier.
Tavani, H. T. (2011). Ethics and Technology: Controversies, Questions, and Strategies for Ethical Computing. Wiley Publishing.
Van Dalen, A. (2012). The algorithms behind the headlines: How machine-written news redefines the core skills of human journalists. Journalism Practice, 6(5-6), 648-658.
Waytz, A., Heafner, J., & Epley, N. (2014). The mind in the machine: anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, 52, 113-117.
Zhao, Y. (2019, March 14). From lab to screen, how does "Sogou AI synthetic anchor" approach real people? Retrieved from http://tech.ifeng.com/c/7l29zEZlDS7.
This work is licensed under the terms of the Creative Commons Attribution 4.0 International License.