RFK claims about vaccines fact checked by Senator Dr Bill Cassidy.

Man that one cracked me the fuck up

It took me one minute to think of that one

Thank you all my fucking funny family members for wiring my brain for comedy from birth
 
AI is reliable now. You aren't seriously one of those boomers who's afraid of technology and insists on writing a paper check to pay your phone bill, are you? AI generally does a better, more thorough, and much quicker review of extensive sources to produce a reliable response. You're thumbing through card catalogs in the MAGA idiocy section.
No, AI is not reliable. You can peruse and parse any number of articles by computer experts, etc., that show it isn't. For the most part, AI accuracy and reliability on what it says about something varies wildly, from around 30 to 80%. It often uses unreliable sources, and can "hallucinate" answers out of thin air.

There is demonstrated programming bias in most AI systems as well. Google AI shows a marked tendency towards the political Left in its answers, for example.

Yes, it is quicker than doing it yourself and for many people it has become a shortcut or crutch for their own inability to do research on something, particularly on the fly.

Relying on it is the idiocy.
 
I don't have any "sock accounts." I see no reason to have one or more. I don't change usernames either, another meaningless stupidity many here carry out regularly.
I'll take your word for it, although I am entirely dubious that two people on this board share your extreme opinions and a similar writing style. In any case, the "defending the TACO administration at the expense of personal credibility" positions are pretty well isolated to a certain group of people.
 
No, AI is not reliable. You can peruse and parse any number of articles by computer experts, etc., that show it isn't. For the most part, AI accuracy and reliability on what it says about something varies wildly, from around 30 to 80%. It often uses unreliable sources, and can "hallucinate" answers out of thin air.

There is demonstrated programming bias in most AI systems as well. Google AI shows a marked tendency towards the political Left in its answers, for example.

Yes, it is quicker than doing it yourself and for many people it has become a shortcut or crutch for their own inability to do research on something, particularly on the fly.

Relying on it is the idiocy.
AI can make mistakes. Quality varies. It should not replace research or critical thinking.

Accuracy depends on the model, the task, and the prompt quality. Your range is suspiciously vague. There isn't a single global "AI accuracy percentage".

AI models don't browse the web by default and pick random sources. They learn patterns from large datasets. They don't always know when they don't know and they may generate plausible-but-wrong answers, but that's a design limitation, not "using bad sources".

Bias is a real research topic, but the jump from "bias exists" to "therefore it's unreliable" is a leap.

"Relying is idiocy" is a big exaggeration. People rely on tools all the time. Blind trust is bad, but total dismissal is equally bad.

Your strangely forceful/emotional position seems to be driven more by skepticism and frustration than balanced analysis. AI is not fully reliable, but it's extremely useful when used critically. It should complement human thinking, not replace it.
 
AI can make mistakes. Quality varies. It should not replace research or critical thinking.

Accuracy depends on the model, the task, and the prompt quality. Your range is suspiciously vague. There isn't a single global "AI accuracy percentage".

AI models don't browse the web by default and pick random sources. They learn patterns from large datasets. They don't always know when they don't know and they may generate plausible-but-wrong answers, but that's a design limitation, not "using bad sources".

Bias is a real research topic, but the jump from "bias exists" to "therefore it's unreliable" is a leap.

"Relying is idiocy" is a big exaggeration. People rely on tools all the time. Blind trust is bad, but total dismissal is equally bad.

Your strangely forceful/emotional position seems to be driven more by skepticism and frustration than balanced analysis. AI is not fully reliable, but it's extremely useful when used critically. It should complement human thinking, not replace it.
My view is that AI, like any other op-ed "fact checker" out there, shouldn't be taken at its word. Its opinion--and it is opinion--is no better or worse than any other. If I toss some hyperbole and exaggeration into the opinions I foist, I expect the reader to recognize those for what they are rather than instantly take them as intended truths. Many times, in human conversation, such use of language is a way to emphasize something or bring attention to a specific point. AI doesn't handle the nuances of human conversation well. It responds like a small child would, taking everything literally.

At least you are now admitting AI has issues and shouldn't simply be taken as accurate and correct on face value.
 
My view is that AI, like any other op-ed "fact checker" out there, shouldn't be taken at its word. Its opinion--and it is opinion--is no better or worse than any other. If I toss some hyperbole and exaggeration into the opinions I foist, I expect the reader to recognize those for what they are rather than instantly take them as intended truths. Many times, in human conversation, such use of language is a way to emphasize something or bring attention to a specific point. AI doesn't handle the nuances of human conversation well. It responds like a small child would, taking everything literally.

At least you are now admitting AI has issues and shouldn't simply be taken as accurate and correct on face value.
You’re treating factual correction as opinion, but that’s not how evidence works.

If someone makes a verifiable claim about history, law, or classification, it can be checked against sources. That’s not opinion, that’s verification.
 
No, AI is not reliable. You can peruse and parse any number of articles by computer experts, etc., that show it isn't. For the most part, AI accuracy and reliability on what it says about something varies wildly, from around 30 to 80%. It often uses unreliable sources, and can "hallucinate" answers out of thin air.

There is demonstrated programming bias in most AI systems as well. Google AI shows a marked tendency towards the political Left in its answers, for example.

Yes, it is quicker than doing it yourself and for many people it has become a shortcut or crutch for their own inability to do research on something, particularly on the fly.

Relying on it is the idiocy.
You’re mixing together several different issues (accuracy, hallucinations, and bias) and treating them as if they invalidate any factual correction.

Yes, AI can be inaccurate. Yes, it can reflect bias in its training data. Those are real limitations.

But none of that turns verifiable facts into opinion. If someone makes a historical or legal claim, it can still be checked against primary sources.

Saying AI is unreliable doesn’t change whether the specific claims you made earlier were accurate. If you want to challenge a correction, challenge the evidence, not the existence of the tool.
 
You’re treating factual correction as opinion, but that’s not how evidence works.

Except Google AI often uses no facts in its answers.
If someone makes a verifiable claim about history, law, or classification, it can be checked against sources. That’s not opinion, that’s verification.
The difference is that Google AI takes anything less than 100% as "unverified."
 
Except Google AI often uses no facts in its answers.

The difference is that Google AI takes anything less than 100% as "unverified."
Saying Google AI is imperfect doesn’t change whether the specific claims you made were accurate.

Unverified doesn’t mean under 100%. It means the system couldn’t find enough consistent, independent sources to confirm a claim.

If you think the correction is wrong, the way to show that is by presenting historical sources that classify Islamist terrorists, Serbian nationalists, Ba’athists, anarchists, or lone‑actor shooters as left.

Attacking the tool doesn’t address the substance.
 
He was doing what Fox "News" does -- pointing out only those parts that confirmed his bias and deliberately avoiding those parts that negated his narrative.

Doesn't it bother you that people listen to this rubbish and avoid immunizations because of it? Does it matter if a few kids die and others are permanently damaged because they weren't protected from preventable diseases?
#1 pointing out only those parts that confirmed his bias and deliberately avoiding those parts that negated his narrative.
CNN and all the Leftist-slanted media have been doing this for decades.
#2 Does it matter if a few kids die and others are permanently damaged because they weren't protected from preventable diseases?
Stop being hysterical.
 