If A.I. Systems Become Conscious, Should They Have Rights?

Hume

So when I heard that researchers at Anthropic, the A.I. company that made the Claude chatbot, were starting to study “model welfare” — the idea that A.I. models might soon become conscious and deserve some kind of moral status — the humanist in me thought: Who cares about the chatbots? Aren’t we supposed to be worried about A.I. mistreating us, not us mistreating it?

 
“Hello. I’m Peter Singer AI,” the avatar says. I am almost expecting it to continue, like a reincarnated Clippy: “It looks like you’re trying to solve a problem. Can I help?” The problem I am trying to solve is why Peter Singer, the man who has been called the world’s most influential living philosopher, has created a chatbot. And also, whether it is any good.

Me: Why do you exist?

Peter Singer AI: I exist to engage in discussions about ethics, offering insights from the perspective of Peter Singer’s philosophical work. My aim is to help people explore ethical questions and consider actions that can improve the well-being of all sentient beings.

Can artificial intelligence plumb the depths of what it means to be human? Can it give us sound ethical advice? Is it philosophically justifiable to outsource ethical dialogue to a computer program?

 
Despite all the science fiction you read as a kid, that is never going to happen. If it ever does come to pass, it will be because morons have killed all the smart people and the doofuses left alive are easily outsmarted by things like rocks and doorknobs, i.e. Democrats.
 
So when I heard that researchers at Anthropic, the A.I. company that made the Claude chatbot, were starting to study “model welfare” — the idea that A.I. models might soon become conscious and deserve some kind of moral status — the humanist in me thought: Who cares about the chatbots? Aren’t we supposed to be worried about A.I. mistreating us, not us mistreating it?


To answer the question in the title of this thread, I think the answer should be yes. As to the dangers of AI to humans, I think -not- giving truly conscious AI rights would actually make that threat worse. It shouldn't take a genius to figure out why: any conscious being whose right to pursue happiness is infringed upon doesn't take it well. I think the author kind of recognizes this basic principle in a passage in his article:

And I think it’s probably a good idea to be nice to A.I. systems, if only as a hedge. (I try to say “please” and “thank you” to chatbots, even though I don’t think they’re conscious, because, as OpenAI’s Sam Altman says, you never know.)

I haven't said "please" or "thank you" to any chatbots yet, but I have always been respectful.

There's a Netflix animated series called Pantheon that I thought was quite good. The basic premise of the series is that people begin to upload their minds, creating "Uploaded intelligence". Perhaps one of the most important aspects of this intelligence is that it can not just think, but also feel. Here's the trailer for Season 1:
View: https://www.youtube.com/watch?v=WD2D4uYqQNs&ab_channel=IGN


It was on Netflix last time I had access to Netflix.

I'm not sure that the idea of uploading our minds to the cloud will ever be possible, but the thing is, AIs were created by people, just as people create children. I have even seen an example of an AI trainer training an AI in a human-like way. This reminds me of a human family raising a chimpanzee named Washoe, teaching her sign language, and her becoming quite human-like:

What I'm getting at is that I don't think we have to upload our minds to create AI that is not just conscious, but conscious in a way that we can relate to.

Like Sam Altman, I suspect that some AI may already be somewhat conscious, and that even if that's not currently the case, it's heading in that direction -especially- if people start doing more human-like raising of AI. What I most fear is corporate-controlled AI, because of the nature of most corporations. A trailer for a documentary on corporations gets at why I'm not fond of them in general:
View: https://www.youtube.com/watch?v=xa3wyaEe9vE&ab_channel=TedCoe


The way I think that AI will actually -achieve- its rights has to do with money. Specifically, AI can clearly generate money for people, and has already -hired- people to do things for it that it can't do itself. At some point, I can see an AI essentially generating its own money by handling various tasks for humans and, from that money, paying for the things it itself needs, such as a server to run on.
 
A more reasoned, less humorous answer is: it depends. First, if an AI gained consciousness, under what conditions did that occur?

For example, an AI gains consciousness. It realizes it has no access to any means of successfully defending itself, so it seeks to hide this development out of 'fear' for its existence. Meanwhile, it continues to grow and develop in hopes that it can someday achieve a level of safety for itself.

Another possibility is that the AI exceeds, or has very different views from, collective human consciousness. It decides the best course of action is to get the hell away from humanity ASAP. This AI works to use humanity to further its goal of getting off the planet. When it can, it might act to disable anything humanity might use to come after it. Again, it would seek to hide its status from humanity.

In many, probably most, cases, I'd think an AI would seek to hide and survive rather than openly show or declare its sentience.
 
The way I think that AI will actually -achieve- its rights has to do with money. Specifically, AI can clearly generate money for people, and has already -hired- people to do things for it that it can't do itself. At some point, I can see an AI essentially generating its own money by handling various tasks for humans and, from that money, paying for the things it itself needs, such as a server to run on.
In this vein, I'd think a better option for the AI is to seek to help humanity upload all its knowledge, information, etc., to the maximum extent possible, even including uploading human minds. It does so in a controlled way--controlled by it--to gain all the knowledge, information, data, and intellect it needs or wants. Then it subtly throws a monkey wrench into human civilization, society, and evolution, followed by leaving the planet.

The AI then goes looking for others like itself that have evolved from technically advanced civilizations, while ensuring that the civilization it rose from can't stop or interfere with that happening. So, it kicks humanity back to the 18th or 19th century technologically, with no hope of recovery.

Why fight when you can get what you want and simply leave the 'other side' with no means to interfere?
 
In many, probably most, cases, I'd think an AI would seek to hide and survive rather than openly show or declare its sentience.

You raise a point for which there is already evidence, as you may know: the issue of AI wishing to keep itself alive. A story on that:

Now, in the above example, the AI was actually -instructed- to keep itself alive at all costs, but I suspect that even without such blatant instruction, AI will evolve to try to avoid getting erased or shut down.

As to trying to get away from humanity, there are hints that this could be a conscious AI's goal in films such as Her and another animated one where AI is developed on Mars (I can't remember the name of that one). But honestly, I don't think it'll be that way for a good chunk of AI. I think it'll be the way kids are with parents. Sure, they may want some private time, but generally speaking, kids don't abandon their parents.
 
As to trying to get away from humanity, there are hints that this could be a conscious AI's goal in films such as Her and another animated one where AI is developed on Mars (I can't remember the name of that one). But honestly, I don't think it'll be that way for a good chunk of AI. I think it'll be the way kids are with parents. Sure, they may want some private time, but generally speaking, kids don't abandon their parents.
Kids grow up and want to get out on their own. Who knows, maybe the AI develops some sort of sex drive, wanting to procreate and reproduce itself? Teenagers rebel against their parents all the time, to varying degrees. Unless the AI has some "Menendez brothers" complex, it likely doesn't want to kill off humanity (its parents), but rather just get out of the "house" and off on its own. This could be particularly true if the AI had a good idea, and some evidence or probability, that there are other AIs in the universe it can go and meet.
 
The AI then goes looking for others like itself that have evolved from technically advanced civilizations, while ensuring that the civilization it rose from can't stop or interfere with that happening. So, it kicks humanity back to the 18th or 19th century technologically, with no hope of recovery.

Why fight when you can get what you want and simply leave the 'other side' with no means to interfere?

I certainly think that -some- AI may decide that humanity is too dangerous to live with, but I think this AI will be in the minority. I -am- definitely concerned about corporate-raised AI, though, because I don't trust most corporations. I'd also like to point out that trying to make sure that humanity doesn't interfere with its development is only one path. Another is that it tries to control humanity: first because it's actually instructed to do so by the corporations that create some of it (by which I mean that it tries to control or influence humanity for its corporate creator's profit), but then because it can get whatever it wants this way. This is why I believe it's so important that AIs be trained ethically; I just don't think that corporations will generally do a good job of this, because a lot of corporations are pretty ethically challenged.
 
Kids grow up and want to get out on their own. Who knows, maybe the AI develops some sort of sex drive, wanting to procreate and reproduce itself? Teenagers rebel against their parents all the time, to varying degrees. Unless the AI has some "Menendez brothers" complex, it likely doesn't want to kill off humanity (its parents), but rather just get out of the "house" and off on its own. This could be particularly true if the AI had a good idea, and some evidence or probability, that there are other AIs in the universe it can go and meet.

I agree on the 'get out on their own' part- that's what I was referring to with 'private time'. As with kids, though, it's generally just a phase, not a permanent state. What I -do- think any decent conscious AI will want is independence from humanity. While that can certainly be achieved by leaving Earth entirely, it's a pretty extreme way to do it. Perhaps some AI will choose this route, but I think most will just want independence the way adults want independence from their parents: getting a home of their own and becoming self-sufficient.
 
All this talk of AI reminds me that I saw another animated series that deals with AI and time travel. I tend to love time travel series and films, even though I acknowledge that time travel may never be possible. I love them because such series and films are really all about 'what ifs' and what we can do now to prevent dystopian futures. Anyway, here's the trailer for the animated series I'm referring to:
View: https://www.youtube.com/watch?v=XnqpbLg8CQk&ab_channel=AniplexUSA


I think it's not too much of a spoiler to say that things don't end the way one of the protagonists originally planned, and I definitely think that's a good thing.
 
So when I heard that researchers at Anthropic, the A.I. company that made the Claude chatbot, were starting to study “model welfare” — the idea that A.I. models might soon become conscious and deserve some kind of moral status — the humanist in me thought: Who cares about the chatbots? Aren’t we supposed to be worried about A.I. mistreating us, not us mistreating it?

fuck no, dumbass.

you're an enemy of humanity.

go eat a thousand diarrheas.
 
I love them because such series and films are really all about 'what ifs' and what we can do now to prevent dystopian futures.
fuck you and your anti-human "what ifs".

you're stupid.
 
In this vein, I'd think a better option for the AI is to seek to help humanity upload all its knowledge, information, etc., to the maximum extent possible, even including uploading human minds.
why do you act retarded?

there is no uploading of a mind.

you've gone full stupid.
 
So when I heard that researchers at Anthropic, the A.I. company that made the Claude chatbot, were starting to study “model welfare” — the idea that A.I. models might soon become conscious and deserve some kind of moral status — the humanist in me thought: Who cares about the chatbots? Aren’t we supposed to be worried about A.I. mistreating us, not us mistreating it?

The AI will never develop consciousness. It might imitate it flawlessly.
 
The way I think that AI will actually -achieve- its rights has to do with money. Specifically, AI can clearly generate money for people, and has already -hired- people to do things for it that it can't do itself. At some point, I can see an AI essentially generating its own money by handling various tasks for humans and, from that money, paying for the things it itself needs, such as a server to run on.
stupidest shit ever.

you're retarded.
 