Purely out of Language
Language can influence the way you think. But do robots think?
Back in 2021, before ChatGPT was a thing, the Internet was flooded with the yellow and green squares of a much simpler online phenomenon: Wordle.
Wordle's appeal lay in its simplicity. You have to guess a five-letter word. Letters in the right place turn green; letters that are in the word but in the wrong place turn yellow. And, that's basically it!
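In fact, the whole colouring rule fits in a handful of lines. Here is a rough sketch of how such a scorer could work, written as a toy Python version of my own rather than anything from the actual Wordle source:

```python
from collections import Counter

def score_guess(guess: str, target: str) -> list[str]:
    """Colour each letter of a five-letter guess against the target word."""
    colours = ["grey"] * 5
    # Letters of the target that aren't exact matches; these can still earn yellows.
    leftovers = Counter(t for g, t in zip(guess, target) if g != t)
    # First pass: letters in the right place turn green.
    for i, (g, t) in enumerate(zip(guess, target)):
        if g == t:
            colours[i] = "green"
    # Second pass: remaining letters turn yellow if the target still contains them.
    for i, g in enumerate(guess):
        if colours[i] != "green" and leftovers[g] > 0:
            colours[i] = "yellow"
            leftovers[g] -= 1
    return colours

print(score_guess("crane", "cocoa"))
# ['green', 'grey', 'yellow', 'grey', 'grey']
```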
As someone who writes software, I found the same minimalist aesthetic in the website's code as well. No consultation with the home server or messages bouncing through cyberspace to another country and back: rather, once I'd loaded the page, I could switch off the Internet entirely and the game would continue uninterrupted all the way till October 2027, when the preloaded games ran out. (Some people did this when Wordle was bought over by the New York Times: they wanted to stay with the original version.)
ChatGPT screenshots are more eerie, philosophical, funny, and intense than the simple Wordle grids people used to share—but both of them are related to language. And, perhaps surprisingly, they are both based on very simple rules.
When my colleague Akil was in 4th grade, I happened to overhear a session of her library class. The lesson was related to classifying books, and while the librarian was explaining the intricacies of a simplified Dewey Decimal System to the rest of the class, Akil was considering the book that she was supposed to classify: Fundamentals of Trigonometry.
"I think it should go under 400 Language," she finally decided, because "It has so many words: angle; quadrant; radius..."
I later realised that Akil wasn't too far from the truth. Mathematics is all about creating words (and symbols, rules, and definitions) to describe aspects of the world; aspects like "seven" and "subtract" which would not be so obvious otherwise. As an extreme example, numerophile Alex Bellos describes a group of people called the Munduruku who don't have numbers in their vocabulary. This means they have to (for example) name each member of the family to decide if they've got enough fish for dinner. There is, to them, literally no such thing as "seven".
What about concepts that are non-mathematical? The now-discredited but still interesting theory of "linguistic determinism" went so far as to say that our very thoughts are defined by our native language. It's also known as the "Sapir-Whorf hypothesis", which is a bit of a misnomer: Sapir and Whorf never co-authored a paper about it, and while they did have similar opinions, they never stated them as an explicit hypothesis.
"What is a shelter kind of thing that is more like a door but less like a bench?" Snipette co-founder Manasa and I were trying to crack Semantle, a game inspired by Wordle in nothing but name.
Semantle is the "hot or cold" game in lingospace. You guess a word, and it gives you a score for how "close" it is to the target word. So if the target word is "wind", then "air" would be a lot closer than "grass" (but so would "clockwork"). We always begin with a scattershot approach, throwing words (fancy words, since we're Snipette editors) into the air until something scores high. Until then, we end up clutching at straws if a word like "door" is a few percentage points warmer than a word like "bench".
Getting a lead is when the excitement starts. We look back at previous words trying to find patterns, and start using phrases like "in the same direction, but more" which, if you've played Semantle once, you'll be able to immediately relate to. Words stop taking discrete meanings and start unfolding like a landscape which we know exists but can only sometimes catch glimpses of. One of the first team games we played at Snipette had us go down "fertiliser", "forest", "ecology", and "farming" to finally arrive at "cooperation"—a very generic word, to which there are so many ways to arrive, but we somehow took a very agricultural route to get there!
How does Semantle decide how "close" one word is to another? It's based on a mathematical model of the English language: the same kind that neural networks use.
Each word is represented by a multidimensional vector, and, to find out how close two words are, you just measure the distance between the two vectors. (I can imagine "cooperation" sitting in the middle, with "ecology" and "farming" marching off to one side while other routes like "international" or "competition" branch out in other directions).
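To make that a little more concrete, here is a toy sketch of the comparison. Semantle reportedly uses cosine similarity over word2vec vectors with hundreds of dimensions; the three-dimensional vectors below are numbers I've made up purely for illustration:

```python
import math

# Made-up three-dimensional "word vectors", purely for illustration;
# real models learn vectors with hundreds of dimensions from text.
vectors = {
    "cooperation": [0.9, 0.4, 0.1],
    "farming":     [0.7, 0.6, 0.2],
    "clockwork":   [0.1, 0.2, 0.9],
}

def similarity(a: str, b: str) -> float:
    """Cosine similarity: close to 1.0 means the vectors point the same way."""
    va, vb = vectors[a], vectors[b]
    dot = sum(x * y for x, y in zip(va, vb))
    norms = math.sqrt(sum(x * x for x in va)) * math.sqrt(sum(x * x for x in vb))
    return dot / norms

print(similarity("cooperation", "farming"))    # about 0.95: "warm"
print(similarity("cooperation", "clockwork"))  # about 0.28: "cold"
```

The score you see in the game is, in essence, this kind of number dressed up to look like a percentage.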
This adds an interesting dimension to the game. Not only am I exploring the landscape of words and meanings that I know in my head; I'm also peeking into the computations of a machine-learning model. The experience feels very cybernetic.
The classic example of the Sapir-Whorf hypothesis—or, shall I say, linguistic determinism—is the oft-repeated story about the Inuit having numerous words for "snow". The claim is that Inuit speakers are able to make out the subtle differences between all these kinds of snow, whereas English speakers can't: their language is restricting the way they think.
From the other end, languages like the one spoken by the Himba people don't distinguish between "blue" and "green" (which would incidentally resolve a long-standing debate in my family about the colour of the bathroom bucket). The claim, again, is that a Himba speaker would see the two colours as equivalent, just like the ancient Romans did when they described the sky as "green".
Modern linguists point out the flaws in these arguments. English speakers may not have different words for snow, but they can identify different kinds when the context demands; they just reach for adjectives, saying "hard-packed snow" instead of using a discrete word for it. Meanwhile, experiments have shown that people can distinguish between shades even if they don't have different words for them. Looking back, this makes a lot of sense: Manasa grew up with an artist for a father, so she can immediately identify a maroon or a burgundy; I can't name them but I can still tell they're not the same kind of red-brown.
That said, language can certainly influence the way you think—what is known as linguistic relativity. One experiment involved people code-switching to either German or English while watching a video, and the "German thinkers" were able to recall more details—evidently because, while English speakers use phrases like "I was walking", German speakers tend to also specify the goal of the action, such as where they were walking. (A more relatable example could be how, once you know the name for a feeling or ailment, you feel much better about it).
People talk about ChatGPT being "programmed to think" or "programmed to say" a certain thing. This kind of description would make sense for older chatbots, which had long, complex instructions like "If the message says 'hi' or 'hello', respond with 'hi', and check for the person's name which may come in any of the following formats: ..."
That's not at all how ChatGPT does things. What ChatGPT responds with is processed and filtered and censored later, but at its heart is something called a Large Language Model. It runs on the same principle as the autosuggest on your phone: look at what's come before, and, through statistics, predict what word is to come next.
The "through statistics" part includes going through multi-trillion-paragraph datasets, which is where the "Large" part of Large Language Model comes from. (Just because it's simple doesn't mean it's cheap or easy to arrive at!). All the words from here are collected in 'word vectors', similar to how they're saved in Semantle. But in the process, Large Language Models also save another dataset that says which vectors are likely to follow one another. All this is tuned automatically, adjusting the numbers to fit the statistics, so there's no specific instruction saying that "this word is seen to follow this word"—just a set of numbers that turned out to work best.
The beauty lies in the fact that there's nothing specific to program. It could be anything—English, Korean, or even musical notes—as long as it's "something which follows from something that came before".
Given how even the three-rule game of Wordle has given rise to complex analyses and strategies to crack the code, maybe we shouldn't be surprised; perhaps the lesson here is that even simple rules can produce complex results. To add a bit of creativity, ChatGPT doesn't merely pick the "most likely" word; instead, it rolls a die and picks perhaps the second or fifth option, so that it can end up with something new and unpredictable each time.
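Sketched as a toy, that dice roll might look something like this; the candidate words and their probabilities are invented for illustration, and a real model weighs every word it knows:

```python
import random

# Invented probabilities for the word after "The weather today is...";
# a real model assigns a probability to every token in its vocabulary.
candidates = {"sunny": 0.5, "cloudy": 0.3, "unpredictable": 0.15, "soup": 0.05}

def pick_next_word() -> str:
    """Sample in proportion to probability instead of always taking the top word."""
    words = list(candidates)
    weights = list(candidates.values())
    return random.choices(words, weights=weights, k=1)[0]

# Usually "sunny", but every so often something more surprising slips through.
print([pick_next_word() for _ in range(5)])
```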
There are many ways to process a language. Semantle forces you to consider a word and think of its context and meaning, whereas Wordle breaks that word down into its atoms, the letters. Wordiply, released by the Guardian to get in on the game, makes you focus on sequences of letters rather than single ones: given a word, you need to find a larger word that contains it, in the same way that "caterpillar" contains the word "ate".
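Under the hood, that Wordiply rule is little more than a substring search. A minimal sketch, using a made-up miniature word list rather than the Guardian's actual dictionary:

```python
# A made-up miniature word list; the real game uses a proper dictionary.
word_list = ["late", "water", "grateful", "caterpillar", "wordplay"]

def longest_containing(starter: str) -> str:
    """Find the longest listed word that contains the starter letters in sequence."""
    matches = [word for word in word_list if starter in word]
    return max(matches, key=len) if matches else ""

print(longest_containing("ate"))  # 'caterpillar'
```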
Somewhere in the midst of this lies ChatGPT's skill, which is also very singular: "Find the next word" is its primary mandate.
And yet, the conversations with ChatGPT and its ilk are eerily intelligible. What you get is not just "a right answer", but something complex with information, creativity, and even a tuneable personality. Prompted effectively, it can already write essays, draft emails, and pass graduate-level exams in some departments. It can also speak very confidently on various topics (although what it says, while professional-sounding, may not actually be true).
ChatGPT is in an early version, and we can only expect its responses to get better. Depending on whom you ask, it's an exciting, scary, or uncertain future—but what I'm interested in are the philosophical implications, because in some sense this turns linguistic relativity on its head. Forget measuring language's influence on the thought process, and instead witness this:
What happens when you get a "thought process" that is created purely out of language?