Published on LinkedIn on November 20, 2023
I think ChatGPT and LLMs are a threat to humankind. But wait… it's not for the reasons you think.
Sure, apocalyptic scenarios from literature paint a picture where AI disobeys, takes control and rebels against humanity. This scenario both underestimates and overestimates the current state of AI (LLMs, to be precise).
What I sense is more a scenario where humans massively delegate their cognitive operations to so-called AI, LLMs in particular, and consequently become less of… a human.
The Risk of Outsourcing and Weakening Our Human Cognitive Capabilities
We can all agree that, in a nutshell, every technological advancement has manifested as a delegation of one or more human skills to machines. Whether it was tools, printers, clocks, spell checkers, email or video recorders, they all allowed us to outsource a task to a machine so that it could be done faster, cheaper and at scale. And at each of these outsourcing moments, voices rose in opposition or in support: resistance or adherence to change.
One could see this new LLM trend as a continuation of this historical evolution, which is fairly true. But the case here is alarmingly different. We are outsourcing our cognitive capabilities; this has never happened before, or at least not at this scale. We are delegating our ability to use language, articulate thoughts and simply… think.
What also makes this time different is the massive scale: almost the entire workforce in the information industry will be leveraging LLMs of some sort within the next couple of years to write reports, slides, emails, scripts, articles, videos, art, movies and more.
There is also a lot to say about the hallucinations, misalignment and imprecision of these models' output, not to mention their overall performance. But what I am questioning here is the process of massively delegating our cognitive tasks to LLMs.
Activating LLMs' Neural Networks Instead of Stimulating Our Own Neurons
When given the choice between articulating their thoughts through a painful (although rewarding in the long term) exercise or leveraging an LLM with a one-line prompt, most people will take the path of least resistance.
This means that humans, as a species, will be less inclined to develop the skills attached to these activities. In the long run, this will affect the neural wiring responsible for speech and thought.
The main evidence that comes to mind to support this idea is neuroplasticity: the brain's neural connections seem to depend heavily on the frequency and intensity of their use. My intuition is that by delegating these cognitive processes to LLMs, we will be training the models' neural networks to the detriment of our own neurons.
The Risk of Uniformity in LLM Language Output
The massive use of LLMs in most language-based activities (blog posts, news, video scripts, lyrics, speeches, corporate presentations and so on) will create more uniformity in the output. Our language will start to look very similar and tend towards one uniform language: a convergence towards the language these models were trained on.
We can already witness this trend in email communication. The more we rely on spell checkers and autocompletion, the more our emails start to look similar.
And at some point in the future, these same models will start learning from the very output they generated, and innovation in language will plateau for lack of variety.
The Risk of Stagnation in the Societal Games
I tend to appreciate Wittgenstein's perspective on how language mirrors the games humans play in society. The variety of our language reflects the variety of what we do in society, and vice versa. In other words, we are prisoners of our language as much as we produce the language that reflects our jails (games). And societal evolutions and revolutions happen when there is a breakout in the language we use.
The risk of language uniformity described above will also reflect on the games we play in society. A uniform language will probably create uniform societal interactions. We can already glimpse this phenomenon in meme propagation over the last 10+ years, where memes and emojis became the new language standard. It has become hard for a whole generation of heavy emoji users to break out and express complex thoughts or feelings beyond the library of emojis and GIFs we possess.
The Trap of Scalability and Performance, and Why We Won't Be Able to Escape It
Using LLMs may soon become less of a choice and more of a necessity. Their adoption by a critical mass of users transforms them from a competitive advantage into indispensable infrastructure. Much like the trajectories of electricity and the internet, LLMs may become a must-have tool for efficiency.
In other words, we have instantly fallen into the classical prisoner's dilemma. If we don't use LLMs to write articles, send emails and support our customers, our competitors will, and they will outperform us. The mechanism by which this "trap" operates will be largely cost driven: in simple terms, it will cost far less to write at scale with LLMs than to hire a human copywriter.
Prompt Engineering as the New Human Language
One could argue that human skills will simply shift to create value elsewhere in this new value chain. Yes, human focus and skills will move from directly creating digital artifacts (whether text, image or video) to crafting the prompts (the clues) that feed into LLMs to trigger the creation process.
This sounds like a new skill worth learning: replacing the need to master language directly with learning prompts that generate language at a second remove. The main issue, however, is that this new prompting language is a moving target: not only does it depend on the LLM we are using (GPT, PaLM, BERT and so on), it also lacks any coherent set of rules that would make it a compelling language to learn.
To be clear, prompting can never replace human language; it is barely more than a set of hacks to make the machine produce whatever we think the expected output should be.
The Risk of Concentration of Power
Finally, the commercialisation of these models will very likely follow a winner-takes-all pattern, where a few big actors pre-train LLMs for the rest of us.
Just as a few big actors (GAFA and the like) have dominated the tech game over the last 20+ years, LLMs will concentrate power in the hands of a few major players.
All of these risks point to the same concern: by outsourcing critical cognitive capabilities, such as language and the generation of thought, we weaken ourselves as humans, and we do so without any transparency or accountability regarding the algorithms and training data involved.
PS: This article was authored by a human.