If you prefer to listen rather than read, this blog is available as a podcast here.


In the past this has been the time of year when I made predictions. Those predictions were somewhat different from those given by other people. I’m far more interested in being prepared for black swans than I am in predicting whether some mundane political event has a 90% or a 95% chance of happening. But one of the qualities of black swans is their rarity. As such, everything I’ve predicted has yet to occur. In fact, for most of the predictions, there hasn’t even been movement over the last year toward making them more or less likely. There is, however, one notable exception: artificial intelligence.

In my very first set of predictions I asserted that:

General artificial intelligence, duplicating the abilities of an average human (or better), will never be developed.

Though I continue to maintain the accuracy of that prediction, I’ve gotten a lot of pushback on it, more so than for any of my other predictions. This pushback has only gotten more intense as the amazing abilities of large language models (LLMs) have become increasingly apparent. You may have heard about these models, particularly the one released just a month ago: ChatGPT.

If you’ve had the chance to play around with ChatGPT, you know it’s pretty freaking amazing. It seems to possess some real intelligence. So am I wrong? And if I’m not wrong, then I have to at least be less certain, right? Well, I don’t think I’m wrong, yet. But it would be foolish not to update my beliefs based on this new evidence, so I have. Still… I don’t think the evidence is as strong as people think.

We’ve got plenty of evidence for ChatGPT’s ability to produce writing that’s around the average of writing fed into it. But where’s the evidence of it producing far better content than that? Where’s the evidence of genius?

A post from Freddie deBoer sent me down this path. He asked ChatGPT to recreate the “to be or not to be” speech in vernacular African-American English, and the result was profoundly mediocre. This by itself isn’t damning; the technology is still very young. But how does ChatGPT get from mediocrity to brilliance?

There are plans to throw even more writing at it, but unless there’s some cache of superlative writing they’ve been holding back on, won’t more writing just be a way of more deeply entrenching the average? 

If more writing samples aren’t the answer, then another possibility is training. If we have users provide feedback on when it’s being brilliant vs. when it’s being mediocre, then, in theory, it will become more brilliant, in the same way it’s been trained to avoid controversial positions. Unfortunately this sort of reinforcement training doesn’t work well even when the goal is straightforward, and identifying brilliance is anything but straightforward. Also, it would seem that “be brilliant” and “avoid controversy” are going to end up being contradictory mandates much of the time.
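The tension between those two mandates can be made concrete with a toy sketch. To be clear, this is not how any real RLHF pipeline works; the candidates, raters, and scores below are all invented for illustration. The point is simply that when feedback from a “reward brilliance” rater and a “punish controversy” rater is averaged, the safe middle beats the brilliant-but-controversial output:

```python
# Toy illustration (not an actual RLHF implementation): two raters
# score candidate outputs, and we keep whichever candidate the
# averaged feedback prefers. All values here are made up.

candidates = {
    "bold":  {"brilliance": 0.9, "controversy": 0.8},
    "safe":  {"brilliance": 0.4, "controversy": 0.1},
    "bland": {"brilliance": 0.2, "controversy": 0.0},
}

def rater_brilliance(c):
    # This rater rewards brilliance and ignores controversy.
    return c["brilliance"]

def rater_safety(c):
    # This rater only penalizes controversy.
    return 1.0 - c["controversy"]

def combined_reward(c):
    # Averaging the two mandates, as aggregated user feedback would.
    return (rater_brilliance(c) + rater_safety(c)) / 2

best = max(candidates, key=lambda name: combined_reward(candidates[name]))
print(best)  # → "safe": the inoffensive middle wins, not the brilliant output
```

Under these made-up numbers the “bold” candidate scores 0.55, “safe” scores 0.65, and “bland” scores 0.6, so the feedback loop steers the model toward safe mediocrity. Brilliance that courts controversy gets averaged away.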

None of this is to say that ChatGPT and similar technology isn’t going to be terrifically disruptive. It will be. (You may add that to my list of predictions if you wish.) There will be huge changes in the way writing is done, particularly required writing. We may have already seen the effective end of assigning essays as homework. But if AI is going to be as consequential as its proponents claim, it’s not enough for it to produce average work. It has to be able to produce brilliant work.

To distill it down: I have all the faith in the world that an LLM will eventually be able to do 80 or even 90 percent of what humans can do. But it’s that last and top 10% where all the magic is. And it’s not clear to me how we get genius outputs from mediocre inputs, regardless of how voluminous that input is.


I asked ChatGPT to “write a clever appeal for patreon donations to append to the end of an article about chatGPT’s lack of brilliance”. The first result was long and mediocre, so I told it to shorten it. Then it was short and mediocre. Finally I told it to make it funny. Here’s what it came up with:

Hey there! If you enjoyed reading about chatGPT’s lack of brilliance, you might be wondering how we keep coming up with new material. The answer is simple: we have a team of highly skilled monkeys typing on old typewriters. But seriously, if you want to support our team of human writers and keep the monkey-typing to a minimum, consider becoming a patron on Patreon. Every little bit helps, and with your support, we can keep bringing you the most hilarious (and accurate) AI and NLP content around. Thanks for your help!