by Jean-Louis Gassée
Seemingly overnight, but not really out of nowhere, ChatGPT has gained fame for making once-nebulous AI concepts accessible to “normal” people.
[Apologies: a domain name problem, combined with a mistake of mine, made this Monday Note late.]
Before I decode today’s title, let’s take a look at “The state of AI in 2022 — and a half decade in review”, a serious and wide-ranging McKinsey article by Michael Chui published on December 6 (keep that date in mind). The piece starts as follows [as always edits and emphasis mine]:
“Adoption [of artificial intelligence] has more than doubled since 2017, though the proportion of organizations using AI has plateaued between 50 and 60 percent for the past few years.”
If you read the study in its entirety, keep in mind that McKinsey is one of the largest US consulting firms, the kind that terrorize (pardon, advise) giant corporations and foreign governments alike. They, and their brethren, are in the business of sagely assisting evolution, absorbing — or ameliorating — the latest fashions without disturbing the status quo.
So it’s surprising that when you search for occurrences of OpenAI, ChatGPT, or DALL·E you come up empty. Not a word. That a serious article about AI — published just last week — has seemingly ignored these tools is an indication of how quickly the technologies have emerged.
Today, we’ll set DALL·E aside and look at ChatGPT, an interactive language model that’s “trained to follow an instruction in a prompt and provide a detailed response”. In other words, it takes part in a conversation, composes a letter, answers your questions…at your request.
If you haven’t already, I suggest you go to https://chat.openai.com/chat and open an account. It’s great fun — profound, in my opinion — and has caused ChatGPT examples to explode on Twitter and elsewhere. Such as this one:
Just like that, sounding as soulful as a routine corporate exchange. The eagle-eyed Monday Note reader will notice the typo add for had, which the polite robot quietly understood and corrected.
Suddenly, The Rest of Us have gained easy access to a friendly, easily understood implementation of AI technology. Just as suddenly, companies that have been regarded as pioneering AI forges now look a little slow. Why haven’t Google, Meta, Microsoft, and Apple created a user-friendly AI chat robot?
We haven’t yet heard much from the establishment. Unofficially, it seems some corporate chieftains have told their flock to hurry up and “do something”, to not let a small organization take control of the public discourse. In the meantime, the official response follows the ancient wisdom codified by IBM when it still drove industry discourse: It’s nothing. You don’t need it. And we’ll have it in six months.
We saw the routine at work in September 2013 after Apple announced its first 64-bit A7 processor. The establishment’s response? It’s nothing, a mere marketing ploy.
Let’s ask ChatGPT to play this game:
Me: ”write an old IBM-style dismissive memo in reaction to ChatGPT, first claiming “It’s nothing”, next adding “You don’t need it”, and finally stating “And we’ll have it in six months”.
ChatGPT: “First and foremost, we believe that ChatGPT is nothing more than a passing fad…”
I don’t want to occupy too much space in this week’s essay reproducing entire ChatGPT responses, but I encourage you to try the above.
An impressive feature of ChatGPT is that it’s “stateful”: it remembers our conversation and can add to it. When I type…
“Write a memo from Big Company CEO John Doe directing staff to designe a web swervice to compete with ChatGPT.”
…the friendly robot ignores the mistakes and produces a believable note, but it’s a bit short and doesn’t include a schedule. No problem:
“add a six month implementation schedule to the above”
Out pops a fully-formed CEO pronouncement.
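One plausible way to get this “stateful” behavior from a model that is itself stateless is for the client to replay the full transcript with every request. The sketch below illustrates that pattern only; ChatGPT’s internals are not public, and every name here (Conversation, toy_model) is hypothetical, with a toy stand-in where the real model would be.

```python
class Conversation:
    """Accumulates (role, text) turns and replays them all on each request,
    which is how a stateless model can appear to 'remember' the chat."""

    def __init__(self, model):
        self.model = model   # any callable: full transcript -> reply text
        self.turns = []      # list of (role, text) pairs, oldest first

    def say(self, user_text):
        self.turns.append(("user", user_text))
        reply = self.model(self.turns)          # whole history goes out
        self.turns.append(("assistant", reply)) # reply becomes context too
        return reply

# Toy stand-in for the real model: it can react to an earlier request
# only because the full history is handed back to it every time.
def toy_model(turns):
    if any("six month" in text for _, text in turns):
        return "Memo with a six-month implementation schedule."
    return "Memo from John Doe, CEO."

chat = Conversation(toy_model)
chat.say("Write a memo from Big Company CEO John Doe...")
chat.say("add a six month implementation schedule to the above")
```

The second prompt never mentions the memo, yet the follow-up lands correctly, because the first exchange rides along in the transcript.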
I should add that ChatGPT speaks French (trust me) and also Spanish. This is brief enough:
Inevitably, one asks: Is this thing actually intelligent?
The question dates back to the early days of computing when, in honor of pioneer Alan Turing (who died in 1954, tragically mistreated by his contemporaries — and only pardoned in 2013), we agreed on the Turing Test. In a nutshell, the test says we’ll know a machine is intelligent when it’s so adept at carrying on a conversation that we’re completely fooled, believing a human is on the other side of the communication line.
Today, a few quick tests break the illusion. For instance, ask ChatGPT to construct your bio. As many on Twitter have experienced, mine was riddled with errors. Not only is this “thing” not intelligent, but it deals in wrong information.
Nonetheless, many innovators and entrepreneurs, such as Aaron Levie, the successful and articulate founder of cloud company Box, see the positive in ChatGPT:
“[It] will likely play out exactly as innovator’s dilemma suggests. To an expert in any given field, it has worse answers. But most people don’t have access to experts for everything, so it actually is a productivity boost to everyone else.”
Indeed, where McKinsey decided to see a plateau, others agree with Levie that a “language model” is just the beginning of a service that can provide a great benefit to “normies”:
“It feels elitist to knock something that will help 95% of the population, just because it’s not as good as what the top 5% can do. Yet another opportunity to remind ourselves to not let the perfect be the enemy of the good.”
Parents and teachers wonder what will happen to homework if ChatGPT can “write an essay discussing the birth of the Second Amendment and the controversies arising from the wide availability of firearms”. I just tried; the result might be a bit short for some assignments, but it still provides a decent “starter”, as in baking or brewing, for a more fully-baked submission.
As for what is called AGI (Artificial General Intelligence), we have the 2002 bet between AI pioneer Ray Kurzweil and Lotus founder Mitch Kapor:
“Ray Kurzweil maintains that a computer (i.e., a machine intelligence) will pass the Turing test by 2029. Mitchell Kapor believes this will not happen.”
The due date is getting closer, and the outcome possibly dangerous. Surprisingly, ChatGPT doesn’t have access to the Internet; it relies on “training data”, a huge repository of texts that stops in 2021. As ChatGPT gains a more sophisticated language model and a broader audience — and is able to reach the web — it will itself become a source of data, not all of it accurate, and all of it injected into the Internet. How soon it will become a contributor to the world’s “knowledge” is hard to predict, but we already see other AI companies beginning to Embrace And Extend ChatGPT.
This is just a beginning, one that reminds me of the early Personal Computer days when limited, incomplete machines gained great favor because, unlike “serious” machines, they were accessible to us, the unwashed masses.
I’ll finish with two quotations. One from Sam Altman, OpenAI’s co-founder (the other co-founder is a certain Elon Musk, now gone):
“ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness…it’s a mistake to be relying on it for anything important right now. it’s a preview of progress; we have lots of work to do on robustness and truthfulness.”
The other is extracted from a David Heinemeier Hansson December 6 post. (DHH, as the Danish author is known, is a prolific technologist, entrepreneur, author, Le Mans race car driver, “…and family man”):
“ChatGPT is blowing minds left and right including mine. It’s placed a second dot on what appears as an exponential curve of AI competency, following the huge leaps in creative image generation already made this year. So it’s our nature to imagine — or dread! — where the next dot will land, and what perhaps the not-so-distant future will hold for humanity. But remember: Nobody Knows Anything!”
We’re not about to be bored.
— jlg@gassee.com