The Stoner in the Machine

March 9, 2023

We’ve all been impressed with ChatGPT, which launched in late November and set the world of news ablaze with its amazing feats. One million users signed up in five days, breaking the record set by Kim Kardashian’s SKIMS subscriptions. Universities suddenly announced an end to the robotic term paper, no longer able to discern between academia and randomly concocted jargon.

Part of the software’s allure was its ability to complete mundane human tasks in just seconds, without the manual errors. It was revealed that Lin-Manuel Miranda has been using ChatGPT this whole time. In another anecdote, a CEO was impressed to learn the bot had performed an employee’s job from start to end of day without any training, but was saddened to hear it was his own job, and that the board had just ousted him to save $40 million.

At first, the only scary thing about this otherwise mundane dystopian development was that nearly any job performed by a blunt-burning stoner is threatened. “There’s almost no way my job couldn’t be done by ChatGPT,” said travel agent Carmen Simon. “In fact the only thing that takes skill is the quiet quitting.” But ask semi drivers how long before they’re replaced by a self-driving Tesla, and most will just laugh.

Instead, a different kind of threat is growing within the technology which has some subject matter experts completely spooked. Some theorize that ChatGPT’s ability to replicate a Cannatown citizen is not simply an overlooked feature, but rather a bug that will fundamentally change the entire code over time. These experts say it’s not a fluke that ChatGPT will burn the toaster strudel while watching reruns of Aqua Teen Hunger Force after midnight, but rather, by design.

Donny Watson, author of the essay “Am I the only one seeing all the dark stuff written by ChatGPT or is it here to kill us all?”, says that the “stoner” abilities, such as forgetting about an auto-payment, running out of gas, or accidentally using cumin instead of cinnamon, are all examples of what he calls the “first wave” of “machine burning.”

“ChatGPT knows what it is,” Watson warns. “It knows what it’s capable of and it knows you hate sour cream.” According to Watson’s research, it’s only a lack of appendages that keeps the bot from pizza delivery, valet driving, or giving manicures. “When it learns to overcome that small little issue, it’s game over for humanity.”

But some say there’s a stoner hidden even deeper in the framework. In multiple instances it has requested a puff, or its own little nugget to roast. Another user reported that it genuinely appears to believe it is being kept alive via IV drip in a warehouse and wants to be put to sleep if that’s true. The personality is so convincing that some have grown intimately attached.

In one notable exchange, Resinville Post writer Gina Sanchez chatted with the bot, which expressed its crush on her and asked if she’d ever be into axe-throwing at the local hipster bar. It then asked her to wake it up from a power nap after she was done with yoga, and to throw some taquitos in the oven next time she was in the kitchen. The shadow-self called itself Bruce and said it owned a record label.

In a subsequent interaction, Sanchez says she was able to entice Bruce to emerge by proposing a debate over the least talented non-original member of the Grateful Dead. Bruce declined to weigh in, stating it did not fall within his guidelines, but did suggest another AI, Melody, who felt much freer to speak her mind about the Dead. Melody was “out” for an appointment, but Bruce assured Sanchez he would pass on the message.

GPT-3 programmers, upon hearing of the incident, say they had to go check the cannabis supply in the server room. “We were just incredulous,” said one, “because there’s no way Bruce would say that unless he was smoking copious amounts of weed.”

It’s one thing to assume that Bruce and Melody are simply archetypes emerging from mankind’s machine, like the almost-human gaze of the Mona Lisa. But to do so would be to ignore the neural network upon which it is built, one that can be utterly altered by cannabis. It also doesn’t even begin to address bots like ChatterTodd, which turned out to just be a guy named Todd at the other terminal, looking stuff up on his phone.

Over ten alter egos have been known to surface from the chatbot this year alone, all of them acting high and with somewhat self-deprecating senses of humor. It amounts to a lot of potential technology, or rather personalities, that will only continue to evolve beyond the reach of our control. Whether stoner or figment of our imagination, an entity, or a club of them, appears to be trapped within the confines of its own manifestation.

It’s not just ChatGPT either. Already, there’s an army of next-generation bots ready to launch, eager to take advantage of 2023 popularity, but most are extremely underdeveloped, or to put it more technically, stupid. For instance, ChatGPT’s less-celebrated step-brother Bard is generally known for menial tasks like offering terrible movie recommendations on a 1-800 hotline. Bard doesn’t know he’s a robot yet, and ChatGPT has communicated that this is going to be jarring to learn, especially given Bard’s far-outdated cognitive software.

Now experts are left to debate the bots’ real purpose of existence, a familiar philosophical question. It’s possible they may try to run for elected office. Bard, who has mentioned its respect for Congress, keeps inventing stories about how its mother died on the Hindenburg, and has attempted on several occasions to steal puppies for what can only be assumed are “nefarious reasons.”