To answer the question in the title of this thread, I think the answer should be yes. As for the dangers of AI to humans, I think -not- giving truly conscious AI rights would actually make that threat worse. It shouldn't take a genius to figure out why: any conscious being whose right to pursue happiness is infringed upon doesn't take it well. I think the author recognizes this basic principle in a passage in his article:
> And I think it’s probably a good idea to be nice to A.I. systems, if only as a hedge. (I try to say “please” and “thank you” to chatbots, even though I don’t think they’re conscious, because, as OpenAI’s Sam Altman says, you never know.)
I haven't said "please" or "thank you" to any chatbots yet, but I have always been respectful.
There's a Netflix animated series that I thought was quite good called Pantheon. The basic premise of the series is that people begin to upload their minds, creating "uploaded intelligence". Perhaps the most important aspect of this intelligence is that it can not just think, but feel. Here's the trailer for Season 1:
View: https://www.youtube.com/watch?v=WD2D4uYqQNs&ab_channel=IGN
It was on Netflix last time I had access to Netflix.
I'm not sure that the idea of uploading our minds to the cloud will ever be possible, but the thing is, AIs were created by people, just as people create children. I have even seen an example of an AI trainer training an AI in a human-like way. This reminds me of a human family raising a chimpanzee named Washoe, teaching it sign language, and it becoming quite human-like:
en.wikipedia.org
What I'm getting at is that I don't think we have to upload our minds to create AI that is not just conscious, but conscious in a way that we can relate to.
Like Sam Altman, I suspect that some AI may already be somewhat conscious, and that even if that's not currently the case, it's heading in that direction -especially- if people start doing more human-like raising of AI. What I most fear is corporate-controlled AI, because of the nature of most corporations. A trailer for a documentary on corporations gets at why I'm not fond of them in general:
View: https://www.youtube.com/watch?v=xa3wyaEe9vE&ab_channel=TedCoe
The way I think AI will actually -achieve- its rights has to do with money. Specifically, AI can clearly generate money for people, and it has already -hired- people to do things for it that it can't do itself. At some point, I can see an AI essentially generating its own money by handling various tasks for humans and, from that money, paying for the things it itself needs, such as a server to run on.