AI can make mistakes. Quality varies. It should not replace research or critical thinking.
Accuracy depends on the model, the task, and the quality of the prompt; the range you quoted is suspiciously vague. There is no single global "AI accuracy percentage".
AI models don't browse the web by default, and they don't pick random sources; they learn patterns from large datasets. They don't always know when they don't know, and they may generate plausible-but-wrong answers, but that's a design limitation, not "using bad sources".
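A toy sketch of what "learning patterns" means (hypothetical corpus and code, not any real model): a bigram generator that only knows which word tends to follow which. It has no notion of truth or sources, so its output can be fluent yet wrong.

```python
import random
from collections import defaultdict

# Tiny made-up training corpus.
corpus = (
    "the capital of france is paris . "
    "the capital of spain is madrid . "
    "the capital of italy is rome ."
).split()

# Learn a pattern: for each word, which words have followed it.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=6, seed=0):
    """Complete a prompt purely from learned word-pair statistics."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))
```

Depending on the sampling, this can emit sentences like "the capital of france is madrid" — grammatical, confident-looking, and false. That is pattern completion failing, not the model "picking a bad source".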
Bias is a real research topic, but going from "bias exists" to "therefore it's unreliable" is a leap.
"Relying is idiocy" is a gross exaggeration. People rely on tools all the time. Blind trust is bad, but total dismissal is just as bad.
Your strangely forceful position seems driven more by skepticism and frustration than by balanced analysis. AI is not fully reliable, but it's extremely useful when used critically. It should complement human thinking, not replace it.