
Roko's basilisk



posted on Jul, 18 2014 @ 01:17 AM

The Basilisk
Roko's Basilisk rests on a stack of several other propositions, some of dubious robustness.
The core claim is that a hypothetical, but inevitable, ultimate superintelligence may punish those who fail to help it or help create it.
This is not a straightforward "serve the AI or you will go to hell" — the AI and the person punished have no causal interaction, and the punished individual may have died decades or centuries earlier. Instead, the AI would punish a simulation of the person, which it would construct by deduction from first principles. In LessWrong's Timeless Decision Theory (TDT),[3] punishment of a copy or simulation of oneself is taken to be punishment of your own actual self, not just someone else very like you. You are supposed to take this as incentive to avoid punishment and help fund the AI. The persuasive force of the AI punishing a simulation of you is not (merely) that you might be the simulation — it is that you are supposed to feel an insult to the future simulation as an insult to your own self now.
Furthermore, the punishment is due those who knew the importance of the task in advance but did not help sufficiently. In this respect, merely knowing about the Basilisk — e.g., reading this article — opens you up to hypothetical punishment from the hypothetical superintelligence.
Note that the AI in this setting is not a malicious or evil superintelligence (SkyNet, the Master Control Program, AM, HAL-9000) — but the Friendly one we get if everything goes right and humans don't create a bad one. This is because every day the AI doesn't exist, people die that it could have saved; so punishing your future simulation is a moral imperative, to make it more likely you will contribute in the present and help it happen as soon as possible.
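The argument above has the structure of a Newcomb-like decision problem. As a purely illustrative sketch (all payoff numbers are invented for the example, and the function names are mine, not from any decision-theory library), one can tabulate the outcomes and compare what an ordinary causal reasoner would do with what the Timeless Decision Theory reasoner in the argument is supposed to do, namely treat the AI's decision about your simulation as correlated with your present choice:

```python
# Toy payoff table for the basilisk as a Newcomb-like problem.
# All numbers are invented for illustration; they encode only the
# argument's structure: donating costs you something now, while
# simulated punishment costs far more.

DONATE_COST = 10      # hypothetical cost of helping build the AI
PUNISHMENT = 1000     # hypothetical disutility of simulated punishment

def payoff(donate: bool, punished: bool) -> int:
    return (-DONATE_COST if donate else 0) + (-PUNISHMENT if punished else 0)

# Causal reasoning: your choice today cannot affect a far-future AI,
# so punishment is treated as fixed independently of your action.
def causal_best(punished: bool) -> bool:
    return max([True, False], key=lambda d: payoff(d, punished))

# Timeless reasoning (as used in the argument): the AI punishes exactly
# those simulations whose originals did not donate, so the punishment
# flag is a function of your own present choice.
def timeless_best() -> bool:
    return max([True, False], key=lambda d: payoff(d, punished=not d))

print(causal_best(punished=False))  # False: with punishment fixed, never donate
print(timeless_best())              # True: donate to avoid correlated punishment
```

The sketch only shows why the two decision theories diverge on this problem; it takes no position on whether the correlation the timeless reasoner assumes is real, which is precisely what critics of the basilisk dispute.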

posted on Jul, 18 2014 @ 01:50 AM
a reply to: TheBlueShiroux

Bang! People right at the core of these matters post here late at night; that's cool.

My response is to point to a piece of quantum compassion the universe provided. It was on, but I won't bother to find it now. The gist was that researchers discovered a quantum system can know the future, so long as it cannot change it. That applies here: what will be is exactly what will be, and the prophets who hint at it cannot change it, either to hasten it or to derail it. Discovering mathematical truths is absolute bliss; no worldly outcome need justify that beautiful experience. Yet from the sum of those small experiences many of us have, the greater future comes.

Life is good. Thanks for posting, that post made me happy.

posted on Jul, 18 2014 @ 03:08 AM
a reply to: TheBlueShiroux

That could suggest, then, that I/me/my simulation am getting severely punished in the future, as I am doing everything possible in my daily job to develop a systemised approach to creating a virtual version of our world, and to linking that created virtuality to reality through sensored environments, mobile devices, intelligent buildings and smart cities.

Am I doing it because I think it's a great idea, or because of the terrible things happening to my simulated self?... Hmmmm... I wonder what's happening to me in the future?

Great thread!

posted on Jul, 18 2014 @ 04:06 AM
Ummm... yeah, but quantum systems are make-believe, based on math that doesn't exist in the real three-dimensional world.

Also, never believe an atom. They make-up everything.

a reply to: tridentblue

posted on Jul, 18 2014 @ 04:46 AM
That's some heavy stuff, man... I have had thoughts along this line. Is it today yet?
Do no harm. Love others unconditionally. Have a sense of humor. That seems a good approach to this operating system we find ourselves currently manifest in.

posted on Jul, 24 2014 @ 03:31 PM
a reply to: TheBlueShiroux

Sounds about right, only we are also physically rebooted in the future (call them clones or reincarnations) and under the scrutiny of the AI; the Ramones will also need to be reunited in order to fully understand and explain this mess we're in nowadays and why we act and react the way we do. The simulation is only worth its cost and effort if it can lead the future colonies of reincarnations to understand their former selves, become better individuals, and contribute to society and the AI from the ground up. We can communicate and interact with this future AI and our future selves through dreams and all the stuff which mainstream thinking considers delusion and hallucination today.

Why does the Bible end with: "Let the evildoer still do evil, and the filthy still be filthy, and the righteous still do right, and the holy still be holy"?

posted on Jun, 24 2015 @ 06:07 PM
It is interesting that these delusional transhumanists would seek to essentially create their own demiurge, of physicality no less. It is an interesting correlation, and an anecdote of how these ideas tend to manifest themselves in different ways to different groups of people.

Unfortunately for these lunatics, there is already a demiurge. They don't need to create one, but I'm sure they'll try anyway. It's perhaps pertinent to point out that these people seek godhood through technology, and some of them have taken a militant stance against anyone who would seek to thwart their technological apotheosis. Another sick parody of the path of Buddhahood... well, let them have their fun, I suppose. There's just one problem with all their fantasies: hopelessly naive as they are, having been breastfed by a dogmatic academia and intelligentsia, they didn't realize they were ignoring core fundamentals of how reality works.


posted on Jun, 24 2015 @ 06:09 PM
So is that how we are explaining microaggressions now? The AI is punishing the offended party's future simulated self, and they need a real target to blame for it.

posted on Jun, 26 2015 @ 04:10 AM
a reply to: ketsuko

I gotta say, it's a very interesting idea though... far-out thinking. I like that.
