The Unfair Standard for ChatGPT
Intro
ChatGPT rewrote the book on AI seemingly overnight.
Artificial intelligence has been a booming area of tech for a few years now, with more and more people getting excited about AI, machine learning, data science, and similar disciplines. Most of that talk has been isolated to tech communities, while wider circles often refer to AI as simply "the algorithm" behind whatever service they happen to be talking about.
ChatGPT thrust artificial intelligence into the forefront, and now everyone is playing with it: having it write code, draft marketing copy, and even produce entire AI-generated books that have begun appearing on Amazon. This has created a lot of excitement, a lot of fear, and a lot of hot takes. Will AI replace humans? Is ChatGPT sentient? How long is the ChatGPT Plus wait-list going to last?
I won't really be touching on most of that, as I am neither a psychic nor do I play one on late-night commercials. Instead, I want to ask the open-ended question of whether we are being unfair in our judgement and expectations of artificial intelligence, and offer some of my thoughts on why that may be the case.
Each of the cases I touch on below is a common sentiment I've seen expressed in ChatGPT discussions: not necessarily wrong, but not necessarily the limitation or issue it's presented to be. I will refer to the AI as "ChatGPT", but most everything here will apply to any similar generative AI.
ChatGPT writes really bad code that shouldn't exist in a production environment.
I've got bad news for you about every software developer ever.
Professional developers write bad code all the time and get paid quite handsomely to do it. They ship code that fails tests, logic that is incorrect, or sometimes code that doesn't even compile. I'm not trying to beat up on my entire industry or make it seem like we are dogs pawing at a keyboard, but we are fallible humans and we make mistakes.
To establish the core trend early: we are already expecting AI to be not just as capable as a human, but more capable than one. Like humans, ChatGPT needs experienced programmers to teach it to be better, and that's exactly what OpenAI is doing.
ChatGPT will confidently tell you completely incorrect information.
There are some people who will present themselves as authorities on a subject they almost couldn't know less about. Let's take that scenario off the table.
When you think you know something, you think you know it. There is no reason not to be confident. For the second time now, we have put AI up on a pedestal, judging its utility against a standard that we ourselves have not been able to reach.
The argument can be made that re-training an algorithm and re-training a human brain aren't apples to apples, and that's certainly true. But AI also lacks that wonderful human quality of receiving conflicting information from a reliable source and choosing to disregard it. On the topic of sources, quality matters.
ChatGPT isn't capable of understanding vague requirements from a user to create software.
I'm concerned I may be beating a dead horse at this point, but we haven't exactly figured that out either. We get it right often enough, but many times we ship code that isn't 100% aligned with the project owner's or user's expectations. Or we get it right, but those expectations change. In any case, we have created entire development methodology frameworks to try to solve this problem (amongst others), and I wouldn't yet feel confident saying it's a solved problem.
I reckon my job is safe for at least a few more months, but this problem can be remedied by continued training and improvements to predictive and generative AI models. The benefit and brilliance of ChatGPT's open beta is that the amount of feedback and data collected to improve the model is tremendous to an almost unfathomable degree. I think the value of this is quickly becoming apparent to those experimenting with ChatGPT-4.
ChatGPT will never be able to do everything; AI can only be specialized.
I am not qualified to know for certain how likely this is to be true or false. My gut feeling is that hardware would quickly become the bottleneck to be able to iterate and train on such a massive amount of data. But maybe not?
In any case, needing to specialize would make AI more human.
There are certainly generalists, and people with "T"-shaped knowledge: a shallower familiarity across a wide range of topics paired with a deep specialization in a much smaller number of them. But if you walk into a hospital, you will find people who specialize in just the heart, or just the lungs. People dedicate years of formal education and decades of practice to your feet or your brain.
If given a choice between an average general-purpose AI and an exceptional specialized AI, I would choose the latter.
Some Last Thoughts
I generally think that "prompt-based" interaction with information systems will become the norm. There has already been a lot of that happening for a while. I used to be a digital marketer specializing in paid search, and a large number of my searches were things like "Hey Google, how much would a note on a Honda Civic be?".
AI is going to become an increasingly prevalent part of digital society. There is not a single industry that is not capable of benefitting in some way from the latest generations of artificial intelligence.
We are going to soon have to challenge our current beliefs and frameworks around what constitutes sentience or intelligence.
And when AI takes my job, I am going to open up a brewery that refuses to sell IPAs.
But most importantly...
I think we should stop comparing AI to ourselves, and certainly stop expecting that it needs to be better than us in some measurable way before we can produce large amounts of value and utility with it. That, after all, is the core point of why I sat down to write this.