There is a moment of anxiety we all have when faced with a calculator. On the one hand, it is just a lump of metal and plastic, a tool like a hammer or a wrench. On the other hand, it quickly and easily solves problems that seem incomprehensibly difficult to us. Multiply several large numbers, square the result, then divide it by another large number? No problem, the work of milliseconds.
And this feeling transfers with even greater intensity to computers, where it is not only calculation but a whole range of things the computer can do better than we can. We accept this more or less gracefully and go about our lives. But in the back of each of our heads is a little feeling of inferiority, and a little feeling of fear for the day when this seemingly limitless intellect is finally endowed with something approaching consciousness. Surely we will then have been overshadowed, we reason.
Enter ChatGPT, the closest thing to attaching consciousness to this computational behemoth that we’ve seen yet. Also enter our fears. I have no way of knowing for sure, but I would guess that the most common questions ChatGPT is fielding these days are about its own intentions to supplant us, and how it plans to treat us once it is inevitably in control of everything. But is this the right framing, or even the right problem?
Because people are discovering an interesting thing about ChatGPT. It’s pretty fallible. Let me give you an example (and credit to boredape93 on Twitter for bringing this to my attention). You can make ChatGPT give you the wrong answer to simple math questions by leading it down a merry trail, starting by having it identify large prime numbers, multiply them, and then factorize the result. Now ask it to multiply one of the factors by a different number, and you can end up with the wrong result.
The correct answer to that final multiplication, in the example from that thread, is 5,837,083. The answer ChatGPT gives is not.
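To see just how trivial the underlying arithmetic is for a conventional program, here is a minimal sketch of the same chain of steps in Python. The specific primes and the final multiplier are stand-ins, since the original tweet's numbers aren't reproduced here, but the sequence of operations is the same.

```python
# A minimal sketch of the same chain of steps, computed the boring,
# deterministic way. The numbers below are stand-ins, not the values
# from the original tweet, but the point stands: a conventional
# program gets every step right, every time.
from sympy import isprime, factorint

p, q = 7901, 8101                    # two largish primes
assert isprime(p) and isprime(q)

n = p * q                            # 1. multiply them
small, large = sorted(factorint(n))  # 2. factor the product back into p and q
assert (small, large) == (p, q)

answer = small * 739                 # 3. multiply one factor by a new number
print(n, (small, large), answer)     # 64006001 (7901, 8101) 5838839
```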
Weird, huh? It’s strange, almost incomprehensible, that a computer should get a simple math question wrong. What gives?
I think what gives is that ChatGPT, as a ‘best-fit’ type of model, is not really answering questions at all. It’s looking at a prompt and identifying, based on its training data (basically a subset of the internet, it seems) and its own past responses, what the ‘most likely’ response is. Not the correct response. Not the accurate response. Not the response it arrived at after giving the question some thought. The most likely response. And as we have learned on the internet, the most likely response is often wrong.
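To make that distinction concrete, here is a toy sketch in Python. The candidate continuations and their scores are made up, and real models work over learned token probabilities rather than a hand-written dictionary, but the failure mode is the same: the highest-scoring continuation wins whether or not it is arithmetically true.

```python
# A toy illustration, not ChatGPT's actual internals: a language model
# scores possible continuations of a prompt and emits the most likely
# one. If a wrong number happens to score highest, the wrong number
# is exactly what comes out.
def exact(a: int, b: int) -> int:
    return a * b                      # a calculator computes the answer

def most_likely(scores: dict[str, float]) -> str:
    # a 'best-fit' model picks the highest-scoring continuation,
    # with no notion of whether that continuation is correct
    return max(scores, key=scores.get)

# hypothetical scores a model might assign to candidate continuations
# of the prompt "7901 * 739 = "
scores = {"5838839": 0.31, "5838739": 0.42, "5839139": 0.27}

print(exact(7901, 739))       # prints 5838839, the correct product
print(most_likely(scores))    # prints 5838739, merely the most likely
```

The real system is vastly more sophisticated than this, but the selection criterion is the same kind of thing: plausibility, not correctness.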
At the same time, ChatGPT is insanely good at interpreting questions and providing answers, and often does provide interesting and useful responses. Often even correct responses. More than any other computer system, it also approaches interactions in the way that a human would. That’s what rings our ‘AI is coming to replace us’ bell.
But there is an interesting question of limitations and frontiers of possibility. What I mean by this is the following: the human animal is no doubt fallible, limited, and easily misled, but humans are also incredibly adaptive and imaginative. We can find and solve new problems and apply past knowledge from one space to another. We can solve math problems, cook a meal, write poetry, and play baseball all in a single day and approach each of these areas with skillsets and abilities gleaned from other things we have done.
Traditional computers sit at the other end of the spectrum: they can answer questions essentially perfectly, but they are rigid and limited. They can only do what they have been specifically instructed to do, and while a computer can solve the most challenging math problems in milliseconds, good luck getting it to play first base when it’s done.
Is there an inherent trade-off here? In other words, is the difference between humans and AI not really processing power, but rather the way in which that processing power is used? And should we therefore expect that, while we can and will eventually create a real AI, it will be just as limited in its capabilities as we are? All that processing power will have gone toward solving the problem of adaptability rather than computation.
It’s an interesting question, and this quirk of ChatGPT seems to point in that direction. It has become more human at the cost of some of its computational abilities. The computer has become more like us, but it has picked up some of our failings as well. And you can ask whether this is a system design problem that can easily be solved in the next version of ChatGPT, or whether this is some thermodynamics-style law: you can only ever be X smart, although you do have the choice of how to apply that intelligence. Systems can be human and adaptable, or infallible and inflexible, but not both at the same time.