On Cybernetically Enhancing Creative Output
The naive idea around 'Artificially Intelligent' systems is that they will be used to replace humans. While some roles are at higher risk, there is much more promise if we think of LLMs as cybernetic systems that can amplify human agency.
This essay was not written by ChatGPT or any other LLM, but neither was it written by me alone. It represents an interaction in which large language models form part of a cybernetic system, with feedback flowing between human and machine.
Much has been said about large language models (LLMs), often wrongly referred to as artificial intelligence, and their role in creative output, specifically in writing. Some hold the naive idea that they will be used to replace humans, and while the more mechanical and transactional writing roles, such as copywriting, low-effort content creation, and technical writing, are at higher risk, there is much more promise if we think of LLMs as cybernetic systems that amplify human agency.
LLMs: Large Language Models such as GPT-3, the technology behind ChatGPT. These models use statistical prediction to output language that approximates the expectations of the humans interacting with them.
Cybernetic Systems: Closed systems that use feedback loops to achieve specific outcomes.
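To make the "statistical prediction" in the definition above concrete, here is a toy bigram model: the simplest possible form of next-token prediction. Real LLMs use neural networks trained on vast corpora, but the principle, predicting what comes next from what came before, is the same. The corpus here is invented for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus, purely for illustration.
corpus = "the cat sat on the mat the cat ran".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word."""
    return following[word].most_common(1)[0][0]
```

After "the", the model predicts "cat", not because it understands cats, but because "cat" followed "the" most often in its training data. Scale this idea up by many orders of magnitude and you have the essence of an LLM.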
Using LLMs to Write
The naive approach to using A.I. for writing is to simply paste a prompt into an LLM and expect an output. For the most part, doing this will produce nothing new or interesting. An LLM's output can only be as good as the data it was trained on, and even with great data, it is rife with inaccuracies. Case in point: today's ChatGPT is trained on internet content, a large part of which has been shaped into SEO-friendly marketing blog copy. While Google made the web more discoverable, it also made it more of a marketing platform, and people are gaming that system. There is also a feedback loop that will exacerbate this problem as ChatGPT-generated content fills up more and more of the data used to train the next version of ChatGPT. As we get into the nature of feedback loops in the next section, we will see why this can be extremely problematic.
A cybernetic approach is to treat the LLM as a feedback machine. We can feed it our thinking and our output and have something like a conversation, at a more utilitarian level, that helps clarify our thinking. The LLM takes our output as its input, analyzes it, and returns statistically relevant advice that we can then use as input in turn. This process is about using the LLM as a tool that extends our cognition and agency; it has nothing to do with 'replacing' the human. One example is ChatGPT suggesting that an example be added to this very paragraph.
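The loop described above can be sketched in a few lines. This is a hypothetical illustration, not a real integration: `critique` stands in for an LLM call (in practice, a chat-completions API) and is stubbed here with simple heuristics so the loop runs without any external service, and `revise` is the human's step in the loop.

```python
def critique(draft: str) -> list[str]:
    """Stand-in for an LLM: return statistically plausible advice.
    A real system would send the draft to a model API instead."""
    advice = []
    if "for example" not in draft.lower():
        advice.append("Add a concrete example to support the claim.")
    if len(draft.split()) > 60:
        advice.append("Consider splitting this into shorter sentences.")
    return advice

def feedback_loop(draft: str, revise, max_rounds: int = 3) -> str:
    """Human output goes to the machine; machine advice goes back to
    the human (the `revise` callback), until the critique is empty
    or the rounds run out. The human stays in the loop throughout."""
    for _ in range(max_rounds):
        advice = critique(draft)
        if not advice:
            break
        draft = revise(draft, advice)
    return draft
```

The point is that the value lives in the closed loop, not in either endpoint: the machine never writes the essay, and the human never writes it unaided.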
When we talk about intelligence, we are talking about something that we do not yet fully understand. Whatever it is that we do when we exercise intelligence, however, is completely different from what LLMs do. While LLMs can produce output that resembles human language, they lack understanding as we know it. This is why I am particularly skeptical of apocalyptic A.I. predictions about creative work. It is true that LLMs can replace humans in some aspects of creative work, such as generating formulaic content or simple copywriting, but thinking of an LLM as a replacement for human creativity is a mistake.
LLMs are best used as tools to augment human creativity and productivity. In fields where LLMs can replace humans, the output will become saturated with LLM-generated content, and the need for human creativity and originality will become even more critical. In those fields, humans who can leverage generated content while adding originality, creativity, taste, nuance, and other human attributes will stand out.
We will have to adapt to the reality of LLMs that keep getting more accurate, but we will be better served by thinking of them as something closer to a calculator than to a writing outsourcing service. We can still use them to teach some of the same critical thinking skills we want to encourage in our students, though this of course requires adaptation. There will certainly be cases where their use is inappropriate.
How to Think About LLMs
Part of the confusion surrounding LLMs like ChatGPT stems from referring to them as 'Artificial Intelligence'. This misrepresentation muddies the waters and encourages misunderstanding. LLMs are not actually dealing with intelligence at all, and using that term does not help direct the conversation or research in a productive manner.
Intelligence is not equivalent to statistical prediction, and viewing statistical prediction as intelligence undermines what LLMs are truly good at. In many cases, attempting to make LLMs respond like human intelligence actually hinders their capabilities and results in 'dumbed down' answers. For instance, one could develop a chess fitness function that prioritizes making 'the most human move' instead of 'the best move'. While this may be an interesting research project, I'd argue it is not the most effective use of a statistical prediction model.
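The chess example above can be made concrete with a toy comparison of the two objectives. Everything here is invented for illustration: the candidate moves, engine evaluations, and human-play frequencies are made up, where a real system would take them from a chess engine and a database of human games.

```python
# move: (engine_eval_in_pawns, fraction_of_humans_who_play_it)
candidate_moves = {
    "Nf3": (0.8, 0.10),
    "e4":  (0.5, 0.70),
    "h4":  (-0.3, 0.01),
}

def best_move(moves):
    """Objective 1: maximize playing strength (engine evaluation)."""
    return max(moves, key=lambda m: moves[m][0])

def most_human_move(moves):
    """Objective 2: maximize how often humans play the move -- a
    statistical-prediction objective, not a strength objective."""
    return max(moves, key=lambda m: moves[m][1])
```

With this data the two objectives disagree: the engine prefers Nf3, while the "most human" objective picks e4. Optimizing for human-likeness is exactly the kind of imitation task a statistical model excels at, but it is a different goal from playing well, which is the point of the distinction above.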
Going back to cybernetics, these tools create feedback systems, and the value of the tool lies in the new system that is created, not in the output of the tool itself. Having a machine create human-like output is far less interesting than having a machine create a system that enhances human thinking. It just so happens that one version of this enhances human agency while the other replaces it. Just as humans lack the ability to process data at the scale of LLMs, LLMs lack the intelligence, creativity, and intention that humans have. Both are better off with each other.