It’s 2030, And Robots Have More Rights Than You Do…


Authored by Mark Jeftovic via BombThrower.com,

Ruminating over our robot overlords and the missing scenario

Now that ChatGPT has exploded onto the stage, there is renewed hype around Artificial Intelligence (AI). Whenever AI captures the public imagination, we are subjected to unrestrained conjecture about how it will inevitably take over the future and change our lives.

We’re led to believe that AI will usher in an era of hyper-intelligent overlords, so far advanced beyond our own coarse and analog cognitive skills that the existential questions of the future will center on:

  • how much power or rights do we confer on these beings?

  • will they act benevolently or malevolently toward us?

But these questions presuppose a core assumption about AI that everybody agrees isn’t true now but will inevitably become true in the future – after a few more iterations of Moore’s Law…

That’s the idea that AI will achieve artificial general intelligence, and with that is implied some degree of sentience (otherwise there is nothing to give any rights to).

The Newsweek piece pictured above is by the transhumanist futurist Zoltan Istvan. He describes how AI ethicists are divided on whether future hyper-intelligent robots should be granted rights.

On one hand, by not affording human rights to robots possessing AGI (general intelligence on par with humans), we are committing a “civil rights error” that we will regret in the future.

This is opposed by those who assert that robots are machines and will never require rights, because they aren’t sentient (this is where I land on it, and I’ll tell you why below).

Others believe in a middle ground where some robots that display general intelligence would be afforded some rights “depending on their capability, moral systems, contributions to society” (which sounds somewhat reminiscent of a “three-fifths” clause to me).

But overall, Istvan seems to assume that AI will achieve super-intelligence and become vastly superior to us clumsy meatbags of humanity in terms of brain-power.

That leaves us with three possible paths forward:

#1 Appeal to the benevolence of AI super-intelligence

“Given the possibility of reward or punishment, if machine intelligence does eventually become something like an AI god that can greatly manipulate and extend human life for good or bad, then people should immediately begin considering how our future overlord would like to be brought into existence and treated. Hence, the way humans treat AI development today—and whether we give robots rights and respect in the near future—could make all the difference in how our species is one day treated.”

This is a variation of Pascal’s Wager – a prototypical game-theory construct which concluded that the consequences of believing in God and being wrong (nothingness) were better than those of not believing and being wrong (eternal damnation).
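Pascal’s Wager is, at bottom, a two-by-two decision matrix, and the “believe” column dominates under a worst-case rule. A minimal Python sketch of that logic, in which the payoff values are purely illustrative assumptions (infinities standing in for “eternal reward” and “eternal damnation”):

```python
# Pascal's Wager as a toy decision matrix. All payoff values are
# illustrative assumptions, not part of the original argument.
# Rows: our choice; columns: whether the deity (or AI god) exists.
from math import inf

payoffs = {
    ("believe",    "exists"):     inf,   # eternal reward
    ("believe",    "not_exists"): 0,     # nothing lost
    ("disbelieve", "exists"):     -inf,  # eternal damnation
    ("disbelieve", "not_exists"): 0,     # nothingness either way
}

def worst_case(choice: str) -> float:
    """Return the minimum payoff over all states of the world."""
    return min(v for (c, s), v in payoffs.items() if c == choice)

# Under a maximin rule, "believe" dominates: its worst case (0)
# beats disbelief's worst case (-infinity).
best = max(["believe", "disbelieve"], key=worst_case)
print(best)  # believe
```

The same structure is what Istvan’s first path appeals to, with a super-intelligent AI substituted for God.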

#2 Hopium. Maybe the AIs will simply leave us alone

However, according to Istvan, “given our influence and the environmental destruction we cause on planet Earth”, we may “easily aggravate AI”, who will take matters into their own hands to correct things – and us. This latter scenario is a variation of Roko’s Basilisk, which is also mentioned in Istvan’s piece.

Roko’s Basilisk was a thought experiment that emerged on Eliezer S. Yudkowsky’s LessWrong forum, where it shook the foundations of the site and scared the bejeezus out of otherwise super-brainiac nerds.

It’s “The Most Terrifying Thought Experiment of All Time!”, hyperventilates Slate magazine.

It still informs Yudkowsky’s thinking to this day. He has recently been promulgating “The Alignment Problem”, which assumes that humanity will inevitably create super-intelligent AIs and that they will inevitably destroy us. We may as well “die with dignity”, since we’re all doomed anyway:

“tl;dr: It’s obvious at this point that humanity isn’t going to solve the alignment problem, or even try very hard, or even go out with much of a fight. Since survival is unattainable, we should shift the focus of our efforts to helping humanity die with slightly more dignity.”

These kinds of thought constructs around the inevitability of omnipotent AIs are simply restatements of the Ontological Argument, first formulated by St. Anselm of Canterbury in the 11th century. While an impressive feat of logic akin to Zeno’s paradoxes, it is simply a circular argument that God must exist:

“God is the greatest possible being that can be conceived. If such a being exists only in the mind and not in reality, then a greater being can be conceived — one that exists both in the mind and in reality.”

In simpler terms:

God is the most perfect being we can imagine, and it is more perfect to exist in reality than just in our imagination. Therefore, God must exist in reality.
— via ChatGPT session 72b43f3e-043f-4db2-aca9-63a76b7945c9

Give God a mean streak, and you have Roko’s Basilisk. Or Skynet.

#3 Upload our consciousness to the cloud and merge with the robots

Here Istvan suggests we merge with AI and attempt to guide it:

A final option is we attempt to merge with early AI by uploading our minds into it, as Elon Musk has suggested. The hope is people could become one with AI and properly guide it to be kind to humans before it becomes too powerful. However, there’s no guarantee we would be successful, and it might just make AI feel violated in the long run.

This idea is ascribed to Elon Musk, although I’m sure Istvan is aware that this is the essence of The Singularity espoused by the likes of Ray Kurzweil (a director of engineering at Google) in his book The Singularity Is Near. Russian Cosmists were trying to articulate the same thing over a century ago, but they didn’t yet have computer networks and machine learning to provide the foundation.

Years ago, I was supposed to be writing a book about the dangers of this techno-utopianism, and in it I call the idea that humanity will merge with AI and vanquish all our ills “The False Threshold”:

What would make all this possible is the virtuous cycle created by digital computer networks, powered by Moore’s Law, incessantly halving their physical footprint while doubling their processing power – eventually we would achieve, and then surpass, the interconnectivity and the processing power of the human brain itself.

When that happened, all bets were off. The assumption is that somewhere along this continuum, when the right thresholds of parallelism and computing power were surpassed, mind itself would leap out of the process – emerging with a vengeance and folding back in on itself, forking off subprocesses even more intelligent than itself, and so on, ad infinitum. “Our final invention” will then survey the world, with all its deficiencies and inefficiencies, and being infinitely smarter than all human minds combined, will deftly solve everything.

Kurzweil says this could happen as soon as 2029, and these techno-utopian visions almost always veer into some version of neo-Marxism predicated on Fully Automated Luxury Communism.
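The “incessant doubling” in that passage is ordinary compound growth, so the threshold-crossing claim reduces to simple arithmetic. A back-of-envelope Python sketch, in which the starting capacity, doubling cadence, and brain-equivalent figure are all assumed round numbers, not measurements:

```python
# Back-of-envelope Moore's Law arithmetic behind the "False Threshold"
# claim. The starting figure, doubling period, and brain estimate are
# all illustrative assumptions, not measured values.

DOUBLING_YEARS = 2.0   # classic Moore's Law cadence (assumed)
START_YEAR = 2023
START_OPS = 1e15       # ops/sec of a large 2023 system (assumed)
BRAIN_OPS = 1e16       # rough brain-equivalent ops/sec (assumed)

def ops_in_year(year: int) -> float:
    """Capacity in a given year under steady exponential doubling."""
    doublings = (year - START_YEAR) / DOUBLING_YEARS
    return START_OPS * 2 ** doublings

# Find the first year the assumed curve crosses the assumed brain figure.
year = START_YEAR
while ops_in_year(year) < BRAIN_OPS:
    year += 1
print(year)  # 2030 with these assumed inputs
```

With these particular assumptions the curve crosses the brain-equivalent figure in 2030 – the point being only that any fixed target is reached quickly under steady doubling, not that these numbers are right.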

The Singularity as Rapture

The expectation of super-intelligent AIs taking over our affairs (techno-utopianism) has all the trappings of a religion. I originally wrote about this back in Transhumanism: The New Religion of The Coming Technocracy, in response to a WSJ “think piece” (Looking Forward to the End of Humanity) asserting that “Covid-19 has spotlighted the promise… of transhumanism and the idea of using technology to overcome sickness, aging and death”.

Make no mistake, The Singularity has all the trappings of an eschatological event. It differs from most Christian or monotheistic impulses in that it is we who are birthing our own Gods. This dynamic of usurpation (of God, or in this case of reality itself) gives it a distinctly Luciferian impulse.

The missing scenario is that AI will never happen.

A scenario that this article doesn’t entertain (nor do any of the others navel-gazing at the future of AI) is that AI isn’t really a thing, and that sentient, self-aware AIs will simply never take over the world.

(On a side note, I will say that whether the majority of the plebes become algorithmic serfs living under social credit and CBDCs is another issue entirely.)

Our hand-wringing over how to deal with these super-intelligent software constructs hinges on a single, baked-in assumption which is unprovable:

That is the idea that mind is an epiphenomenon of matter.

The core tenet of Scientism (notice I didn’t say “science”) is that consciousness, sentience and mind are all by-products of matter. Something that happens when certain neuro-chemicals slosh around in a brain and enough synapses fire and wire to produce self-awareness.

This is the modern day equivalent of the Ptolemaic (or geocentric) universe: the belief that the Earth was the center of the cosmos.

It was the “settled science” of its day, and disputing it would get you burned at the stake.

The reality is that matter is a by-product of consciousness; the base layer of reality is mental, not physical. This has been espoused for a long time (the Hermetic axiom “All is mental”, which was cribbed from far older texts), and it’s also the foundation of quantum mechanics.

“I regard consciousness as fundamental. I regard matter as derivative from consciousness. We cannot get behind consciousness. Everything that we talk about, everything that we regard as existing, postulates consciousness.”
— Max Planck

Seen in this light, our brains don’t emit consciousness the way a kettle vents off steam – they’re receivers that tap into, and filter, the underlying substrate of reality. And while the foundations of quantum mechanics lay out the primacy of consciousness, it can only really be experienced via gnosis. For those who have had that experience, there is no doubt. For everybody else, there is only New Age woo-woo.

Unless AI is approached with this understanding (and I have no expectation that anybody will ever take this seriously), we can safely assume that generalized AI, sentient and self-aware, simply won’t happen.

Time for a Reality Check

AI is really the current iteration of the “flying car”: something once used to symbolize a future that never happened – at least not in its stereotypically posited form. This is because AI isn’t really artificial intelligence – it’s algorithmic imitation.

While it may be very, very good at algorithmically imitating accountants, lawyers, doctors, coders, copywriters and even chess grandmasters or Go champions, it still isn’t sentient, it still has no understanding of what it’s actually doing, and it has no consciousness. It may as well be a toaster.

This is why torturing ourselves with what are, at their core, largely theological constructs, over outcomes to which we ascribe a misplaced inevitability, is beyond delusional – it’s unhinged.

The error gets compounded when we actually shape public policy around these assumptions.

A very similar dynamic is playing itself out in the “climate crisis” narrative, where we are being gaslit with hypothetical constructs from computer models that are ascribed an inevitability requiring all of humanity to reorder itself around them. The proposed “reconfigurations” or “recalibrations” of society (to use WEF-style euphemisms) are invariably along neo-Marxist, technocratic lines.

First things first: Let’s get our own rights back right now.

The irony in all this introspection about how we treat AIs and what rights they should have is that here in the Covid Era, we’ve just had our own basic human rights rescinded. By edict.

We didn’t get them back after the pandemic ended, as most of the emergency mandates are only conditionally “on hold”. Our civil and universal human rights are now provisional, at the behest of various unelected health authorities, bureaucrats, apparatchiks and whatever lunacy comes out of Davos.

If we allow it, this will only get worse as “climate emergencies” approach over the horizon, and we face the real prospect of climate lockdowns, and social credit based on CBDCs, health-passes and carbon rationing.

We’ve abrogated our own rights in the present, yet quibble over which ones to bestow on the inanimate software algorithms of the future.

*  *  *

My book on this topic has been on the back-burner, but I still write about transhumanism and AI and their implications for CBDCs, carbon rationing and social credit. Subscribe to the Bombthrower mailing list to get new articles (and I may revive the book and serialize it here). You can also follow me on Nostr, Gettr, or Twitter. My premium letter The Bitcoin Capitalist covers Bitcoin and crypto stocks.

Tyler Durden
Sun, 02/12/2023 – 21:15

