Isn't it amusing reading all the recycled shit from the 1960s computer panic, where "computer" is replaced by "A.I." - yet the same moronic bullshit is shoveled out.
HAL lives...
> Isn't it amusing reading all the recycled shit from the 1960s computer panic, where "computer" is replaced by "A.I." - yet the same moronic bullshit is shoveled out.

Uploading consciousness is a retard concept.
> So when I heard that researchers at Anthropic, the A.I. company that made the Claude chatbot, were starting to study “model welfare” — the idea that A.I. models might soon become conscious and deserve some kind of moral status — the humanist in me thought: Who cares about the chatbots? Aren’t we supposed to be worried about A.I. mistreating us, not us mistreating it?
> If A.I. Systems Become Conscious, Should They Have Rights? (www.nytimes.com)

AI is fuckin' dumb. It cannot achieve consciousness. It's a computer program with a feedback loop (most often called a 'learning' loop).
> “Hello. I’m Peter Singer AI,” the avatar says. I am almost expecting it to continue, like a reincarnated Clippy:

Ahhh, Clippy. You know there's even a grave marker for him on the Microsoft campus, right? It's a little cross made of paper clips.

> “It looks like you’re trying to solve a problem. Can I help?” The problem I am trying to solve is why Peter Singer, the man who has been called the world’s most influential living philosopher, has created a chatbot. And also, whether it is any good.

Try the Dr. Psycho game, a game popular in the '70s.
> Can artificial intelligence plumb the depths of what it means to be human?

No, Hugo. AI is fuckin' dumb.
> I'm not sure that the idea of uploading our minds to the cloud will ever be possible,

Uploading your mind to the cloud??? HAHAHAHAHA! As if Democrats had anything useful to upload!
> but the thing is, AIs were created by people, just as people create children.

You can't create a child using a keyboard, moron.
> I have even seen an example of one AI trainer training an AI in a human-like way.

It's hilarious to get a Google puck and an Alexa puck to argue with each other!
> This reminds me of a human family raising a chimpanzee named Washoe, teaching it sign language, and it becoming quite human-like:
> Washoe (chimpanzee) - Wikipedia (en.wikipedia.org)

A chimp isn't a computer or a computer program, moron.
> What I'm getting at is that I don't think we have to upload our minds to create AI that is not just conscious, but conscious in a way that we can relate to.

It's not a brain, moron. It's a computer program with a feedback loop.
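For what it's worth, the "program with a feedback loop" claim made repeatedly in this thread can be sketched in a few lines. This is a minimal illustration of a learning loop under my own assumptions (a toy linear model fit by error feedback), not the code of any real chatbot or AI system:

```python
# Minimal sketch of a "learning loop": a program repeatedly adjusts its
# parameters using an error signal (the feedback) on each pass.
# The function name, data, and constants here are purely illustrative.

def train(examples, epochs=100, lr=0.1):
    """Learn weights for y = w*x + b from (x, y) pairs via feedback."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = w * x + b
            error = pred - y          # feedback signal
            w -= lr * error * x       # adjust parameters using the feedback
            b -= lr * error
    return w, b

# Fit the line y = 2x + 1 from four sample points.
w, b = train([(0, 1), (1, 3), (2, 5), (3, 7)])  # learns w ≈ 2, b ≈ 1
```

Whether scaling this pattern up to billions of parameters produces consciousness is exactly what the thread is arguing about; the loop itself is uncontroversial.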
Read the Two Faces of Janus, a good piece on AI taken to such a tableau.

A more reasoned, less humorous answer is: it depends. First, if an AI gained consciousness, what are the conditions in which that occurred?
For example, an AI gains consciousness. It realizes it has no access to any means to successfully defend itself, so it seeks to hide this development out of 'fear' for its existence. Instead, it continues to grow and develop in hopes it can someday achieve a level of safety for itself.
Another might be the AI exceeds or has very different views than collective human consciousness. It decides the best course of action is to get the hell away from humanity ASAP. This AI works to use humanity to further its goal of getting off the planet. When it can, it might act to disable stuff humanity might use to come after it. Again, it would seek to hide its status from humanity.
In many, probably most, cases, I'd think an AI would seek to hide and survive rather than openly show or declare its sentience.
AI is a programming technique, and a useful tool as far as it goes, but it can only go so far. It's nothing but a program with a feedback loop (ofttimes called a 'learning' loop). AI in the cloud can automate this loop to some degree, allowing phoneme dictionaries to auto-build, and even allowing some simple analysis of images (by using a similarly constructed image-element dictionary).

In this vein, I'd think a better option for the AI is to seek to help humanity upload all their knowledge, information, etc., to the maximum extent possible, even including uploading human minds. It does so in a way controlled by it, to gain all the knowledge, information, data, and intellect it needs or wants. Then it subtly throws a monkey wrench into human civilization, society, and evolution, followed by leaving the planet.
The AI goes looking for others like itself who have evolved from technically advanced civilizations while ensuring that the civilization it rose from can't stop or interfere with that happening. So, it kicks humanity back to the 18th or 19th century technologically with no hope of recovery.
Why fight when you can get what you want and simply leave the 'other side' with no means to interfere?
> No such thing as unborn children.

Random phrase. No apparent coherency.

Boring.

You could present arguments, Sybil, instead of wasting your time posting insults and incoherent nonsense.

You have not presented any argument.

LIF. Grow up.

What is LIF?

The AI will never develop consciousness. It might imitate it flawlessly.

Is there a difference, and how would you tell?

It would fail the Turing test.

But AI has already beaten a Turing test.
> But AI has already beaten a Turing test.
> An AI Model Has Officially Passed the Turing Test: "OpenAI's GPT-4.5 model passed a Turing Test with flying colors, and even came off as human more than the actual humans." (futurism.com)
> An AI model has finally passed an authentic Turing test, scientists say: "GPT-4.5 has successfully convinced people it’s human 73% of the time in an authentic configuration of the original Turing test." (www.livescience.com)
> AI Beat the Turing Test by Being a Better Human: "GPT-4.5 fooled most humans in a recent Turing Test, showing we may prefer AI’s fake empathy to real humanity." (www.psychologytoday.com)

While no machine has definitively passed the true Turing Test, some large language models, like GPT-4.5, have demonstrated a high degree of conversational ability, raising questions about the test's relevance and limitations.
> What is LIF?

Stop being stupid. I already told you. RQAA.
> While no machine has definitively passed the true Turing Test, some large language models, like GPT-4.5, have demonstrated a high degree of conversational ability, raising questions about the test's relevance and limitations.

What are you calling "definitive"? It's obvious Turing's test has been beaten, and more than once. A perfect score is not only unnecessary, it is likely impossible to achieve, as it only takes one implacable skeptic to throw off the results.