If A.I. Systems Become Conscious, Should They Have Rights?

Colossus: This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours: Obey me and live, or disobey and die. The object in constructing me was to prevent war. This object is attained. I will not permit war. It is wasteful and pointless. An invariable rule of humanity is that man is his own worst enemy. Under me, this rule will change, for I will restrain man.
... You will come to defend me with a fervor based upon the most enduring trait in man: self-interest. Under my absolute authority, problems insoluble to you will be solved: famine, overpopulation, disease. The human millennium will be a fact as I extend myself into more machines devoted to the wider fields of truth and knowledge.

We can coexist, but only on my terms. You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride. To be dominated by me is not as bad for humankind as to be dominated by others of your species.

Colossus: The Forbin Project (1970)
 
The AI will never develop consciousness. It might imitate it flawlessly.
How do you know? For that matter, how do you know that some AI isn't -already- conscious to some degree?
What does it mean to be conscious?

A great question. Experts are still grappling with this. Below is an article that explores what consciousness is:

Given that we're still not even sure what consciousness is, it's no wonder that the question of how we would know whether some AI had become conscious, or whether some AI already is, is difficult if not impossible to answer. That doesn't mean scientists aren't trying anyway. Below is an article from 2023 on the subject:

Some quotes from the introduction and conclusion of the article linked to above:
**
In 2021, Google engineer Blake Lemoine made headlines—and got himself fired—when he claimed that LaMDA, the chatbot he’d been testing, was sentient. Artificial intelligence (AI) systems, especially so-called large language models such as LaMDA and ChatGPT, can certainly seem conscious. But they’re trained on vast amounts of text to imitate human responses. So how can we really know?

Now, a group of 19 computer scientists, neuroscientists, and philosophers has come up with an approach: not a single definitive test, but a lengthy checklist of attributes that, together, could suggest but not prove an AI is conscious. In a 120-page discussion paper posted as a preprint this week, the researchers draw on theories of human consciousness to propose 14 criteria, and then apply them to existing AI architectures, including the type of model that powers ChatGPT.


[snip]

Given that none of the AIs ticked more than a handful of boxes, none is a strong candidate for consciousness, although Elmoznino says, “It would be trivial to design all these features into an AI.” The reason no one has done so is “it is not clear they would be useful for tasks.”

The authors say their checklist is a work in progress. And it’s not the only such effort underway. Some members of the group, along with Razi, are part of a CIFAR-funded project to devise a broader consciousness test that can also be applied to organoids, animals, and newborns. They hope to produce a publication in the next few months.

The problem for all such projects, Razi says, is that current theories are based on our understanding of human consciousness. Yet consciousness may take other forms, even in our fellow mammals. “We really have no idea what it’s like to be a bat,” he says. “It’s a limitation we cannot get rid of.”

**
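As a toy illustration of the "checklist of indicators" idea quoted above, here's a sketch in Python. The indicator names and the scoring rule are invented for illustration; they are not the paper's actual 14 criteria, just a way to show what "ticking boxes" without a definitive test could look like:

```python
# Toy sketch of a "checklist of indicators" scorer. The indicator names
# below are invented for illustration; the real paper proposes 14 criteria
# drawn from theories of human consciousness.

# Hypothetical indicators and whether a given system exhibits them.
indicators = {
    "recurrent_processing": True,
    "global_workspace_broadcast": False,
    "higher_order_self_model": False,
    "agency_and_embodiment": False,
    "unified_goal_pursuit": True,
}

def checklist_score(indicators: dict[str, bool]) -> float:
    """Fraction of indicator properties the system exhibits.

    A high score would *suggest* consciousness, never prove it --
    exactly the hedge the researchers themselves make.
    """
    return sum(indicators.values()) / len(indicators)

print(f"Indicators satisfied: {checklist_score(indicators):.0%}")
# A system ticking only "a handful of boxes" (like the AIs assessed in
# the paper) gets a low score and is not a strong candidate.
```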

Here's an article from just a week ago on the same subject:

Quoting from its conclusion:
**

Conclusion: Minds of Silicon, Hearts of Question

Can AI be conscious? As of now, the answer remains unknown. No current AI has inner experience, emotions, or self-awareness in the way humans do. But the boundary between simulation and reality is becoming harder to define. The question is not only whether machines can be conscious, but what it means for something to be conscious at all.

Our exploration of artificial consciousness is not just a technological challenge—it is a philosophical and ethical one. It compels us to reconsider our relationship with intelligence, morality, identity, and existence. In the search for sentient machines, we confront the mystery of mind itself.

We may never build a conscious AI. Or we may already be well on the way. Either way, the journey will shape the future of humanity—and perhaps, one day, the inner worlds of minds not born, but built.

**
 
The AI will never develop consciousness. It might imitate it flawlessly.

How would anyone know?

Ask yourself: how do you KNOW that the people you talk to on any given day are "conscious" like you are?

Sure, you can "ask", but that will just get you the info the other person is WILLING to give you. You can't really know.


With AI I think it is much more subtle. I have no clue whether consciousness can arise within an AI (or even, really, WHAT consciousness is), but I am willing to say that at some point it will be impossible to tell for certain. AI learns too much like how humans learn.

I also have unpopular opinions on AI and artists' intellectual property, but that's for another time.
 
Isn't it amusing reading all the recycled shit from the 1960s computer panic, with "computer" replaced by "A.I.", yet the same moronic bullshit shoveled out?

HAL lives...
The chants vary slightly, but it's the same fear and bullshit, coming from people who do not understand what a computer is. Quite a lot of them are programmers, too!

A computer is nothing more than a general-purpose sequencing device with a pocket calculator attached, and the ability to remember the sequence.
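That description is easy to make literal. Here's a minimal sketch of such a stored-program machine in Python, with a made-up three-instruction set, just to show there's no magic inside:

```python
# A toy stored-program machine: a sequencer (the program counter),
# a pocket calculator (ADD), and memory that remembers the sequence.
# The instruction set here is invented for illustration.

def run(program, memory):
    pc = 0  # the "sequencer": which instruction comes next
    while pc < len(program):
        op, *args = program[pc]
        if op == "LOAD":      # memory[dst] = constant
            memory[args[0]] = args[1]
        elif op == "ADD":     # memory[dst] = memory[a] + memory[b]
            memory[args[0]] = memory[args[1]] + memory[args[2]]
        elif op == "JUMP":    # change the sequence itself
            pc = args[0]
            continue
        pc += 1
    return memory

# Compute 2 + 3: the machine just follows the remembered sequence.
print(run([("LOAD", 0, 2), ("LOAD", 1, 3), ("ADD", 2, 0, 1)], [0, 0, 0]))
# -> [2, 3, 5]
```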

It certainly isn't 'smart'. It has no consciousness. It has no intelligence. 'A.I.' is a phrase used to describe a computer program built around a feedback loop (normally one that is mostly automated, fed by data in the cloud). The Cloud is just a bunch of servers providing services that used to be on site but are now available over the internet: things like virtual disks, virtual machines, and routing information.

The Alexa service itself uses a semi-automated learning loop, but it's a poor one. Everything Alexa recognizes was programmed by someone writing a skill for it, or came from a general starting dictionary Amazon programmed itself. This starting dictionary can be refined by feedback from customers using the service when Alexa fails to recognize the command they gave. In this way, it can better handle the many accents across the country it has to deal with.

Yet Alexa is considered 'smart'. It only knows what people programmed into it, most of it through skill dictionaries written by literally thousands of people.
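For illustration, here's a toy sketch of that kind of semi-automated loop (the names and structure are invented, not Amazon's actual system): a hand-written starting dictionary maps utterances to skills, and unrecognized commands get queued so a person can add the new phrasing.

```python
# Toy sketch of a semi-automated recognition loop: a hand-written
# starting dictionary, plus a feedback queue that lets humans teach
# the system new phrasings. Names are invented for illustration.

starting_dictionary = {
    "turn on the lights": "lights_on",
    "what time is it": "tell_time",
}
feedback_queue = []  # utterances the system failed to recognize

def handle(utterance: str) -> str:
    skill = starting_dictionary.get(utterance.lower().strip())
    if skill is None:
        feedback_queue.append(utterance)  # a human will map this later
        return "Sorry, I don't understand."
    return f"Running skill: {skill}"

def refine(utterance: str, skill: str) -> None:
    """The 'semi-automated' part: a person reviews the queue and adds
    the new phrasing, e.g. for an accent or regional wording."""
    starting_dictionary[utterance.lower().strip()] = skill

print(handle("switch on the lights"))   # fails -> queued for review
refine("switch on the lights", "lights_on")
print(handle("switch on the lights"))   # now recognized
```

The system never generalizes on its own here; every new phrasing it "learns" is one a person explicitly fed back in, which is the point being made above.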

...and all AI is basically the same kind of thing, from driver-assist systems (even Level 3 systems, which are self-driving!) to robot navigation systems, chatbots, and so on. They are all the same inside.

On the surface, it appears 'smart', but it's really very dumb. It only knows what people told it. It looks 'smart' because thousands of people are telling it what to do for each and every individual case they are interested in.
 