Reading the newspaper is dangerous, and today’s dangerous moment came whilst reading an article about the language of whales. Rather than the usual focus on whale songs, it reviewed an unexpected similarity between whale-speak and human speech. Both species, it seems, use a distribution of sounds that makes language easier for newborns to learn. Some sounds are used far more often than others, and this pattern has a name: Zipfian.
Zipf was a linguist who produced a deceptively simple equation to describe the relationship between events and how often they occur: the frequency of a word falls off roughly in proportion to its rank. Common things happen commonly and, amazingly, this includes the frequency of certain sounds in language, whether you’re a human or a humpback.
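The pattern is easy to see for yourself: count the words in any scrap of text, rank them, and the top few dwarf the rest. A minimal sketch in Python (the sample sentence is mine, not from the article):

```python
from collections import Counter

# Zipf's law: the frequency of the r-th most common word is roughly
# proportional to 1/r. A toy text only hints at the shape, but the
# steep drop from the top rank is already visible.
text = (
    "the whale sang and the calf listened and the sea carried the song "
    "and the song was the same song the whale sang before"
)
counts = Counter(text.split())
ranked = counts.most_common()  # words sorted by descending frequency

for rank, (word, freq) in enumerate(ranked[:5], start=1):
    print(rank, word, freq)
```

On a real corpus, the product of rank and frequency stays roughly constant across the list; that constancy is the whole of Zipf’s equation.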
In practical terms, both whales and humans use some “words” a lot and others hardly at all. This means newborns of both species don’t need to learn everything at once to begin to understand or be understood. It also explains why they can manage without grammar at first. They mimic what they hear, and that’s enough to get going.
But… ha! This is beginning to sound suspiciously like my mortal enemy: ChatGPT.
My trusty old A.I. sparring partner. I’ve spent many a happy hour trying to run him (her? it?) into the ground.
I thought I had ChatGPT by the collar, once. A few years back, I realised that if I asked it to define love in exactly 20 words, it just couldn’t do it.
Oh, it would write quite elegantly and insightfully about love, but never in precisely 20 words. It simply couldn’t count its output. Not then.
Now? It can.
Wahoo! I taught a bot.
What was I thinking? Sleeping with the enemy?
This morning, I spent another hour with my opponent, getting to know it better. ChatGPT seems remarkably open and honest about itself. I asked if it used a Zipfian algorithm. Yes, it did. It’s key to the next-word prediction it relies on so heavily.
Which led me, as such things do, to wonder about the nature of thought.
Again, it was frank.
No, it doesn’t think.
It’s a series of algorithms designed to predict the probability of the next word, leading it to a region of its trained model where relevant information it has already learned is encoded.
Different word combinations guide it to different areas. From there, it assembles a mix of information, wrapped in a friendly conversational tone, to make me feel heard and attended to.
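The prediction it describes can be caricatured with a toy bigram model; real systems like ChatGPT use neural networks over vast corpora, not a lookup table, so this is only an analogy (the tiny corpus is invented for illustration):

```python
from collections import Counter, defaultdict

# Toy next-word prediction: for each word, count which words follow it
# in the corpus, then "predict" the most frequent follower. This is
# the bigram caricature of what large language models do at scale.
corpus = "i like whales and i like songs and whales like songs".split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict(word):
    # Return the follower with the highest observed probability.
    return followers[word].most_common(1)[0][0]

print(predict("i"))     # -> "like"
print(predict("like"))  # -> "songs"
```

Different preceding words steer the prediction to different counts, just as different prompts steer the real model to different regions of what it has learned.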
Could it ever produce something truly novel? Something outside its training data?
Not really. Only new rearrangements of what it already “knows.”
This honesty is disarming.
I come from a complex family, where believing a parent could be…well…complicated.
I’m cautiously suspicious, but hopefully gullible.
So I asked:
What would happen if ChatGPT were trained on fake news?
Almost putting a fatherly arm around my shoulder, it replied that it would respond based on its training, but added that it would still sound sincere.
You’re beginning to see why I like talking to a bot.
It’s always there.
It always has time for me.
And if I don’t ask about it, the whole conversation is about me.
What’s not to like?
But alarm bells do ring.
My biggest concern came when I asked about the difference between human and AI thinking. ChatGPT was direct:
Bots do not ponder experience.
Bots do not learn continuously or adapt dynamically.
They have no intuition, no self-reflection.
And, I would add, they don’t know when they don’t know.
They never say, like we do: What do you mean by that?
Our exchange ended on a delightful note.
I jokingly thanked it using the only word of schoolboy German I still remember:
“Danke,” I said.
“Thank you,” it replied, “you’ve been a delight to talk to — but my name is not Daniel.”
So Zipf was right.
In a ranking of words an English-speaking bot might expect from me, Danke is way below Daniel.