HUMAN-MACHINE WRITING AND THE ETHICS OF LANGUAGE MODELS
Increasingly, online information is produced by AI-based writing systems such as the Washington Post's newswriting bot, Heliograf; Narrative Science's report-writing app, Quill; Persado's marketing-copy app; and chatbots and Twitterbots too numerous to name. What are the implications, for us and for our society, as we enter the age of AI writing systems, an era moving rapidly toward a world in which writing is produced mostly by machines rather than humans? What are the ethical implications of these developments?

Our research on AI-based writing systems rests on (1) critical analysis of the systems themselves and (2) interviews with their designers and users. Our analysis draws on the field of machine ethics as well as communication/language theory. In our presentation we will discuss several AI writing systems (e.g., GPT-2, Persado, Quill, Google Compose), focusing especially on the language model, or "informational framework" (Russo, 2018), that supports their operation.

What our research reveals so far is that, unsurprisingly, these systems are more effective (and more ethical) when handling well-defined tasks in bounded spaces, with well-established genre conventions, clearly identified audience needs and expectations, and predictable interaction scripts. We also see that, too often, the language models employed rest on a reductive, formalist model of text generation. Designers of AI writing systems need to move beyond formalist, linear input/output models to more complex social models that account for the broader contexts, including the ethical codes, in which communications arise and circulate.