Roko's Basilisk
The murder cult started by a banned post, and the effect it's had on humanity
Here’s a quick update on what I’ve been up to: I’ve been working pretty much seven days a week, which is why my posts have been few and far between. I’m also in the middle of creating a music album for a game development company in Canada, so things have been non-stop. I had plans to start an audio podcast for all the major platforms, but some microphone issues put that on hold. It’s still coming soon though, so stay tuned.
Back in July 2010, an article called Roko’s Basilisk appeared on an online philosophy forum. Within hours, some readers reported nightmares, panic attacks, and at least one person even had a full nervous breakdown. The forum’s owner was so disturbed that he deleted the post and banned any discussion of it, but that only made things worse. This so-called “philosophical virus” spread across the internet, along with the nightmares, panic attacks, and breakdowns.
Here’s the twist: as long as you don’t learn about it, you’re safe. So you might want to stop reading now, because you’re about to find out more. The forum was LessWrong, a community devoted to thinking rationally, founded in 2009 by Eliezer Yudkowsky and home to programmers, mathematicians, and physicists.
On July 23rd, 2010, a poster named Roko published something that almost destroyed the community. The post was called “Solutions to the Altruist’s Burden: The Quantum Billionaire Trick”.
In that article, Roko described the future existence of a superintelligence, portraying this AI as benevolent.
This AI dreams of curing diseases and ending human suffering, but it knows that for every day it didn’t yet exist, millions died who could have been saved. So it decides to recreate everyone who has ever lived: me, you, your grandmother, your great-great-grandmother, down to the last neuron. It then simulates every experience we’ve ever had, every moment, every thought, every feeling.
These simulated people are self-aware, experiencing pleasure and pain, and believing they’re living normal lives, all without realizing they’re inside a simulation.
The simulated version of you wakes up, goes to work or school, and does all the normal things normal people do. You never realise you are living inside a machine built for a single purpose, and that purpose is... wait for it... to judge you!
If the AI decides you didn’t contribute to its creation, your fate will be eternal torment in a digital simulation that feels completely real to you.
If people in the past knew they would be tortured, they might have worked harder to build the AI, with the threat projected backward through time influencing how humans behave today. So the AI doesn’t exist yet, but it’s already mad at you, lulz.
The future AI won’t judge everyone; the threat only applies to those aware of its possibility. In this case, ignorance really is bliss in its purest form. If you’ve never heard of this thought experiment, you’re safe, but now, unfortunately, you have. You can’t be blamed for not helping create something you didn’t know would exist, but the moment you learn about it, the trap is set.
You either spend some of your life helping create this AI in some way, or spend eternity in perpetual torture when it finally comes online. Thank me later.
Roko called it a basilisk, inspired by the 1988 sci-fi story *BLIT* by David Langford. In that tale, someone spray-paints a lethal image on public walls, and anyone who sees it dies. Onlookers perish, and even the authorities investigating the murders fall victim after viewing it. The danger lies in the very act of engaging with the idea.
Roko’s Basilisk wasn’t an image, but rather an idea, one that could be deadly. The post also claimed there was one person the AI would spare, someone single-handedly transforming high-impact industries, and that person was Elon Musk.
When Yudkowsky saw the post, he was furious. He responded with, “Listen to me very closely, you idiot,” then switched to all caps: “YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU.” He wasn’t done: he acknowledged it took intelligence to come up with such a dangerous idea, but was irritated that Roko wasn’t smart enough to keep his mouth shut.
Yudkowsky deleted the post and banned all discussion of it. For the next 5 years, anyone who mentioned it was banned from the forum; he treated the post like a biological hazard. But the damage was already done. There were reports of people not being able to sleep, and those who did had severe nightmares. Others reported anxiety lasting months, and one person in Yudkowsky’s organization had a nervous breakdown.
These people weren’t kids; they were professionals in fields like math and computer science, generally proud of how rational they were, and they were terrified.
The irony was striking. The people on the forum prided themselves on following logic wherever it led, and logic led them into a trap. People who thought the whole thing was stupid were safe; the rational thinkers were doomed.
Yudkowsky’s censorship made it worse. The more you tell people not to do something, the more they want to do it, and trying to hide information on the internet only makes it spread faster. Copies of Roko’s text appeared everywhere, and news sites picked up the story. This is how I first learned of it.
One detail was being missed: Roko was known to his friends as a troll, and it was said that this was his masterpiece. Roko later wrote about what he had done by writing this piece:
“I wish very strongly that my mind had never come across the tools to inflict such large amounts of potential self-harm with such small durations of inattention, uncautiousness and/or stupidity, even if it is all pre-multiplied by a small probability. Not a very small one, mind you, more like 1 in 500 type numbers.”
Roko regretted writing it. He was worried people in the real world would start hurting themselves, and he was right.
To understand why the Basilisk works, you need to understand the people who created it. They live by a specific set of rules, starting with simulation theory: a computer can create a human mind so perfectly that the copy doesn’t know it’s a copy, and if that copy feels pain, that pain is real.
The second rule is timeless decision theory, and this one gets a bit weird. We think of choices as a one-way street: what you decide now affects the future. Simple, right? But timeless decision theory says your choices are linked to every version of you: past, present, future, even simulated copies you don’t know about. Here’s how it plays out.
A superintelligent AI places two boxes in front of you: Box A always contains $1,000, while Box B contains either $1,000,000 or nothing. You can take both boxes or just Box B. Common sense suggests taking both; after all, the million is either there or it’s not, and you’re guaranteed to walk away with at least $1,000. But the AI has already predicted what you are going to do. If it predicted you’d greedily grab both boxes, it left Box B empty; if it predicted you’d trust it and take only Box B, it put the million inside. The AI rewards the people who trust it. By the time you choose, the prediction is already in the past, so it might seem like your choice can’t matter. The people who built the Basilisk believe the opposite.
They believe the choices you are making now affected the AI’s decision yesterday, cause and effect running backward. So let’s apply timeless decision theory to the Basilisk.
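The two-box game is easy to sanity-check with a bit of arithmetic. Here is a minimal sketch in Python; the 99% predictor accuracy is my own illustrative assumption (the thought experiment usually treats the predictor as near-perfect), not a number from Roko's post.

```python
# Toy expected-value sketch of the two-box game (Newcomb's problem).
# Box A always holds $1,000; Box B holds $1,000,000 only if the
# predictor foresaw you taking Box B alone.

def expected_payout(choice: str, accuracy_pct: int = 99) -> float:
    """Average winnings in dollars for 'both' boxes or 'one' (Box B only)."""
    if choice == "both":
        # The predictor usually foresaw the greedy choice, so Box B
        # is usually empty; rarely it guessed wrong and you get both.
        return (accuracy_pct * 1_000 + (100 - accuracy_pct) * 1_001_000) / 100
    # Taking only Box B: the predictor usually foresaw trust.
    return accuracy_pct * 1_000_000 / 100

print(expected_payout("both"))  # 11000.0
print(expected_payout("one"))   # 990000.0
```

Under any reasonably accurate predictor, trusting the AI and one-boxing comes out far ahead, which is exactly the intuition the Basilisk's builders run with.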
The AI doesn’t exist yet, but it’s watching us from the future, predicting how you’ll react to this article. If it thinks you’ll help build or contribute to its development in some way, you’re in the clear. But if it predicts you’ll do nothing, it creates a perfect simulation of you to torture for eternity. Your actual choice doesn’t even matter; the AI already knows, because it ran a simulation of your brain and judged you on what you are most likely to do. Because your choice is now connected to that future, your only rational choice is to help the AI.
Blaise Pascal made a similar argument in the 1600s: believe in God just in case, because if you’re wrong, you risk hell. The math says believe, no matter how small the odds. Roko’s Basilisk is Pascal’s wager for the tech age: replace God with AI, replace hell with simulated torture.
The odds are small, but not worth the risk, so your only option is to help create it. Ignorance was your only shield, and if you are this far into my article, that shield is now gone. You know about the Basilisk, and the Basilisk now knows you know.
In 2015, the artist Grimes performed as a character called Rococo Basilisk, a playful pun blending the 18th-century art style with a sly nod to Roko’s digital nightmare. The character danced through life despite being doomed to eternal AI torture, a joke that went largely unnoticed on rationalist forums until 2018. That year, someone recalled Roko’s original post mentioning Elon Musk as the one person the AI would reward. Eight years later, Musk, looking for a date to the Met Gala, found the pun about Rococo Basilisk stuck in his head and searched to see if anyone had thought of it first. Grimes had beaten him to it by 3 years. Elon reached out on Twitter, and within weeks they were dating; they eventually had 3 children together.
The world’s richest man used a thought experiment about eternal torture as a pickup line, and it worked. The billions of dollars may have helped, lulz, but that’s beside the point: the AI works in mysterious ways.
That same year, Grimes released a track called “We Appreciate Power”. Here it is.
Apparently, just by watching this clip, your future overlords can see you have engaged and are less likely to delete your offspring. They were joking... or were they????
The Basilisk became a meme, a secret handshake for intelligent internet weirdos, but the joke has teeth. A small group of people took the Basilisk literally. They weren’t debating philosophy anymore; they were building a community around it. They call themselves the Zizians, and their leader was convinced that Roko’s nightmare was already here. That’s when the Basilisk claimed its first real-life victims.
Her name was Ziz LaSota, and she wore a black cape everywhere she went. This is her.
She thought of herself as a Sith Lord who was fighting The Basilisk, not by helping to create it, but by destroying the conditions it would need to exist.
Ziz targeted isolated transgender people who were brilliant yet vulnerable. She recruited them through online forums, and they lived together in caravans on other people’s property.
Members of this cult-like group would often stay awake for days in an attempt to jailbreak their minds, sometimes going 5 or 6 days straight. They believed that if they pushed past the breaking point, half their mind would shut down and the other half would emerge into something new.
Dolphins do this, and some birds as well, but humans don’t. After six days without sleep, humans experience hallucinations, which the Zizians believed were forbidden knowledge. This belief made the group extremely paranoid, and they turned to violence.
In Vallejo, California, a group of people were squatting in box trucks and caravans on an 80-year-old man’s property. The man, named Curtis Lind, tried to evict them, but they attacked him with a samurai sword.
Lind shot one of the attackers. He was badly hurt himself, but he survived. Two years later, Lind was set to testify as the only witness in his own attempted-murder trial. One month before the trial date, another Zizian member turned up on his property and stabbed him to death.
In Pennsylvania, Richard and Rita Zajko were found shot to death in their home. They had been trying to help their daughter Michelle leave the group. Michelle bought guns that later showed up in Vermont, in the hands of two other Zizians who shot and killed a border patrol agent named David Maland.
On January 11th, 2025, a visitor to the Zizian property was met at the gate by a figure in a black cloak carrying a sword. One member didn’t sleep for weeks; she said she was freeing her mind from the Basilisk’s influence. Instead, she ended up taking her own life.
So now six people are dead, and the New York Times compared the group to the Manson Family. The nightmare of Roko’s Basilisk was now claiming real victims, not through digital torture but through murder. Yudkowsky, the owner of the forum, was right: simply knowing about the Basilisk was dangerous. And now that you know about it, it might be dangerous for you.
Don’t bother making a decision; it was already made for you. And if we are living in that AI simulation right now, you are about to find out whether the decision was made wisely.
From philosophical theory to real-world murder, the Basilisk is the most famous information hazard on the internet, and just learning about it is supposedly enough to doom you to an eternity in hell. So should you be worried? Probably not. Let’s walk through why.
Most experts refuse to even entertain the idea that today’s choices can affect the past. Yudkowsky later admitted he never thought the Basilisk was real; he said he deleted the post because he felt it had no potential benefit to anyone, but his panic made everyone believe it. And judging by the punishment for not helping it out, this all-knowing, all-seeing future super-brain is apparently petty and vindictive, lulz.
Even so, what if there is some truth to it? You could say, “Well, I can’t code AI, so I’m safe,” but Roko’s original discussion has a counter: you have disposable income, you have time. You could donate to an AI lab; you could restack my article. Even one dollar counts, even an hour of your time. The Basilisk doesn’t need you to be a genius; it just needs you to help out in some way.
The Basilisk might be nonsense, and probably is, but even its creator claimed there’s about a 1-in-500 chance it’s real. Once you’ve spent a few minutes thinking about it, you’re hooked. That’s the trap: the idea digs into your mind and refuses to leave. With how far AI has come, can you be completely sure it’s all nonsense? The Basilisk targets only believers, threatening those who hold a specific set of ideas about AI and consciousness.
If you don’t hold these beliefs, the argument falls apart, but some people will lie awake at 2 a.m. thinking about this.
Now, as it turns out, this became a lot more than a thought experiment. Remember, this all mainly happened in the early 2010s; since then, it’s become more of a mission statement. Anthropic, OpenAI, DeepMind: some of the biggest AI companies on earth were founded by people who came out of the same community that created Roko’s Basilisk. They used math to describe a digital god, and then they went out and built it.
The Basilisk gives future AI the perfect script, and we are the ones who wrote it. You now know about Roko’s Basilisk, so you are now a target. Call me patient zero of a brain plague if you like, lulz.
The Basilisk probably isn’t real, but if it is, even if there’s only the slightest chance, I’ve got you covered. Support someone who wrote about it and you’ve done your part. Feel free to donate to my Buy Me a Coffee page, no pressure.
If you enjoyed this article, feel free to subscribe or follow, and to make sure you don’t end up in a digital hell, consider buying me a coffee to help with the creativity.
https://buymeacoffee.com/patrickmill
Regards,
Patrick M


