Author

**Fred Wordie** is a creative technologist/designer/researcher/long-time AIxD member, living and working in Berlin. In his work, he uses an experimental mindset, critical thinking and playfulness to explore how we engage with technology and each other.

Editors

Karina Zavidova; Ploipailin Flynn

Executive Summary

This guide is part of our community program AI Playground. AI Playground, led by @computational_mama, showcases creators from around the world who use AI tools to feed their creative practice.

This program and guide are generously supported by Stimuleringsfonds Creatieve Industrie.

This guide is our collection of learnings and resources on text-generating AI – one of the most accessible and widespread applications of machine learning. We invited Fred Wordie to walk us through the topic and share his thoughts on which parts of the hype around text-generating AI are legitimate and which are not.

Chapter 1: Introduction to text-generating AI

Text-generating AI, like image-generating AI, is one of the most accessible and tangible applications of machine learning. When given a prompt, pre-trained models like GPT-3 or GPT-J are seemingly able to understand the context of your words and generate creative and coherent responses.
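
To make that concrete, here is a minimal sketch of prompting a pre-trained model, assuming the Hugging Face transformers library is installed; the small GPT-2 model stands in here for heavier models like GPT-J, which are used the same way:

```python
# A minimal sketch of prompting a pre-trained text-generating model,
# assuming the Hugging Face `transformers` library (pip install transformers).
# GPT-2 stands in for larger models like GPT-J, which work the same way
# but need far more memory to run locally.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The strangest thing about the internet is"
result = generator(prompt, max_new_tokens=40, do_sample=True)

# The pipeline returns a list of dicts, one per generated sequence.
print(result[0]["generated_text"])
```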

https://twitter.com/i/status/1566884536054677506

https://twitter.com/i/status/1412480516268437512

Text-generating AI is a tool artists can use to explore language, writers can use for inspiration, and businesses can use to empower (or replace) their workforce. There is no doubt the technology is very cool. Future-y in the same way flying cars and Huel are. However, it also has the potential to create an internet full of AI-generated noise, become the go-to tool for propagandists and homogenise the written word. Thinking about AI-generated text, I find myself asking:

Key terminology

Chapter 2: History of text-generating AI

On 20th February 1947, the famous British mathematician Alan Turing delivered a lecture to the London Mathematical Society, where he talked about testing AI in the game of chess. He also discussed training machines, postulating that they could learn to teach themselves:

‘What we want is a machine that can learn from experience… the possibility of letting the machine alter its own instructions provides the mechanism for this.’ source

This was one of the first times anyone laid the groundwork for how neural networks, including those behind text-generating AI, would work.

Neural networks are models that are shown a bunch of data – in this case text – and start to form their own connections between letters, words and phrases. When it comes to generating results that would normally be produced by human creativity, neural networks learn how to respond to a given prompt in the correct aesthetic way. They do so not by understanding the prompt, but by guessing what words the user might expect back. Hence, the bigger the data set and the more computing power available, the more ‘human-like’ the responses we can expect.
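
To illustrate that ‘guessing what words the user might expect back’, here is a toy sketch – not a neural network, just word counts, so nothing like the real thing in scale – that shows the same core move of predicting a plausible next word:

```python
# A toy next-word predictor – not a neural network, just word counts –
# illustrating the core move of guessing a plausible next word.
import random
from collections import defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog around the rug ."
).split()

# 'Training': record which words follow which in the corpus.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(prompt_word: str, length: int = 10) -> str:
    """Extend a one-word prompt by repeatedly sampling a likely next word."""
    words = [prompt_word]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
# e.g. "the cat chased the dog around the rug . the mat"
```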

Alan Turing, image source

One of the first text-generating AIs to work in such a way was ELIZA, created by Joseph Weizenbaum at MIT in 1966. In conversation with a human, it simulated a psychotherapist having a session with a patient, à la Carl Rogers.
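
ELIZA’s trick was pattern matching: spot a keyword, reflect the user’s pronouns, and hand the statement back as a question. Here is a minimal illustrative sketch – the rules below are invented for demonstration and are not Weizenbaum’s original DOCTOR script:

```python
# An ELIZA-style sketch: reflect the user's pronouns and hand their
# statement back as a question. The rules below are invented for
# illustration – they are not Weizenbaum's original DOCTOR script.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def respond(statement: str) -> str:
    """Turn a statement into a therapist-style question."""
    match = re.match(r"i feel (.*)", statement, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    return f"What makes you say that {reflect(statement.rstrip('.'))}?"

print(respond("I feel anxious about my work"))
# -> Why do you feel anxious about your work?
```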

Even though ELIZA was only really able to respond to statements by rephrasing them as questions, users – including Weizenbaum’s own secretary – became ‘infatuated’ with the program and occasionally forgot that they were conversing with a computer. Reflecting on the development of ELIZA, Weizenbaum writes: