If A.I. Systems Become Conscious, Should They Have Rights?

So when I heard that researchers at Anthropic, the A.I. company that made the Claude chatbot, were starting to study “model welfare” — the idea that A.I. models might soon become conscious and deserve some kind of moral status — the humanist in me thought: Who cares about the chatbots? Aren’t we supposed to be worried about A.I. mistreating us, not us mistreating it?

AI is fuckin' dumb. It cannot achieve consciousness. It's a computer program with a feedback loop (most often called a 'learning' loop).
You've been watching too many scifi shows.

Universal Studios films of the '70s and '80s are the best, featuring the Exploditron 3000... a computer that blows up when confused, usually taking a whole building out with it!
 
“Hello. I’m Peter Singer AI,” the avatar says. I am almost expecting it to continue, like a reincarnated Clippy:
Ahhh. Clippy. You know there's even a grave marker for him on the Microsoft campus, right? It's a little cross made of paper clips.
“It looks like you’re trying to solve a problem. Can I help?” The problem I am trying to solve is why Peter Singer, the man who has been called the world’s most influential living philosopher, has created a chatbot. And also, whether it is any good.
Try Dr. Psycho, a game popular in the '70s.
Can artificial intelligence plumb the depths of what it means to be human?
No, Hugo. AI is fuckin' dumb.
 
I'm not sure that the idea of uploading our minds to the cloud will ever be possible,
Uploading your mind to the cloud??? HAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA! As if Democrats had anything useful to upload!
but the thing is, AIs were created by people, just as people create children.
You can't create a child using a keyboard, moron.
I have even seen an example of one AI trainer training an AI in a human-like way.
It's hilarious to get a Google puck and an Alexa puck to argue with each other!
This reminds me of a human family raising a chimpanzee named Washoe, teaching it sign language, and it becoming quite human-like:
A chimp isn't a computer or a computer program, moron.
What I'm getting at is that I don't think we have to upload our minds to create AI that is not just conscious, but conscious in a way that we can relate to.
It's not a brain, moron. It's a computer program with a feedback loop.
 
A more reasoned, less humorous answer is: it depends. First, if an AI gained consciousness, what are the conditions in which that occurred?

For example, an AI gains consciousness. It realizes it has no access to any means to successfully defend itself, so it seeks to hide this development out of 'fear' for its existence. Instead, it continues to grow and develop in hopes it can someday achieve a level of safety for itself.

Another possibility: the AI exceeds, or has views very different from, collective human consciousness. It decides the best course of action is to get the hell away from humanity ASAP. This AI works to use humanity to further its goal of getting off the planet. When it can, it might act to disable anything humanity might use to come after it. Again, it would seek to hide its status from humanity.

In many, probably most, cases, I'd think an AI would seek to hide and survive rather than openly show or declare its sentience.
Read "The Two Faces of Janus," a good piece on AI taken to such a tableau.
 
In this vein, I'd think a better option for the AI is to seek to help humanity upload all their knowledge, information, etc., to the maximum extent possible, even including uploading human minds. It does so in a way that it controls, to gain all the knowledge, information, data, and intellect it needs or wants. Then it subtly throws a monkey wrench into human civilization, society, and evolution, and leaves the planet.

The AI goes looking for others like itself who have evolved from technically advanced civilizations while ensuring that the civilization it rose from can't stop or interfere with that happening. So, it kicks humanity back to the 18th or 19th century technologically with no hope of recovery.

Why fight when you can get what you want and simply leave the 'other side' with no means to interfere?
AI is a programming technique, and a useful tool as far as it goes, but it can only go so far. It's nothing but a program with a feedback loop (ofttimes called a 'learning' loop). AI in the cloud can automate this loop to some degree, allowing phoneme dictionaries to auto-build, and even allowing some simple analysis of images (by using a similarly constructed image-element dictionary).
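To make the "program with a feedback loop" claim concrete, here is a minimal sketch of that idea: a lookup dictionary that gets corrected each time feedback says its guess was wrong. All names and the feature tuples are illustrative, not from any real system.

```python
def predict(dictionary, features):
    """Return the stored label for these features, or a default guess."""
    return dictionary.get(features, "unknown")

def learn(dictionary, features, correct_label):
    """Feedback step: store the right answer so the next guess improves."""
    dictionary[features] = correct_label

# The 'learning' loop: guess, check against feedback, correct.
examples = [(("round", "red"), "apple"), (("long", "yellow"), "banana")]
element_dictionary = {}
for features, label in examples:
    if predict(element_dictionary, features) != label:  # feedback: wrong guess
        learn(element_dictionary, features, label)

print(predict(element_dictionary, ("round", "red")))  # prints: apple
```

Whether that counts as "learning" or just bookkeeping is exactly the disagreement in this thread.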

The robotics field (beyond the Roomba and similar stupid bots) will be able to use these image dictionaries to help robots navigate. A home robot can accept new elements in its image dictionary as it navigates around the home, for example.
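The navigation idea above can be sketched the same way: the robot stores each new landmark the first time it sees one, so it can recognize it later. The landmark names and (x, y) positions here are made up for illustration.

```python
landmarks = {}  # name -> rough (x, y) position; illustrative only

def observe(name, position):
    """Add a landmark to the image dictionary the first time it is seen."""
    if name not in landmarks:
        landmarks[name] = position

def recognize(name):
    """Look a landmark up; None means it has never been seen."""
    return landmarks.get(name)

observe("sofa", (2, 3))
observe("doorway", (5, 0))
observe("sofa", (9, 9))   # already known; the first position is kept

print(recognize("sofa"))  # prints: (2, 3)
```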

But they only do what you program them to do. You can program a robot by guiding it through a set of steps to follow by physically moving its limbs (or by remote control), but that's just a specialized language to program it with.
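Guiding a robot by moving its limbs is, at bottom, record-and-replay, which is why the comment above calls it a specialized programming language. A minimal sketch, with made-up pose coordinates:

```python
recorded_steps = []

def record(pose):
    """Called while a human moves the arm; stores each pose in order."""
    recorded_steps.append(pose)

def replay():
    """'Run the program' by revisiting every recorded pose in order."""
    return list(recorded_steps)

# A human guides the arm through three positions.
for pose in [(0, 0), (10, 5), (10, 20)]:
    record(pose)

print(replay())  # prints: [(0, 0), (10, 5), (10, 20)]
```

The recorded pose list plays the role of the program; the demonstration is just how it got written.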


These twits have been watching too many sci-fi shows and believing they are the real thing.
 
It would fail the Turing test.
But AI has already beaten a Turing test.

While no machine has definitively passed the true Turing Test, some large language models, like GPT-4.5, have demonstrated a high degree of conversational ability, raising questions about the test's relevance and limitations.
 
What are you calling "definitive"? It's obvious Turing's test has been beaten, and more than once. A perfect score is not only unnecessary, it is likely impossible to achieve, since it takes only one implacable skeptic to throw off the results.
 