There have been reports of AIs "hallucinating"; that's not really what's going on. But it doesn't matter what you call it, or the underlying reason for it. What's important is that it is not fixable, it is in all of these models, and it is a critical fault: it constitutes "inherent vice." Therefore, in any circumstance where the problem space is not strictly bounded and checked by humans, what is produced may be literal sewage, with no way to be certain one way or the other in advance.
The basic issue is that you cannot know whether what it ingests, and thus "learns," is truthful; its processing is therefore poisoned by both intentional and unintentional falsehoods, whether born of ignorance or even material it ingests that was meant as a joke. It will conflate one thing with another because it cannot discern that they are not the same and thus should not be classified together. All an AI is, when you get down to it, is a set of probability weights on a computer-driven sieve. That's it. It's a very large and very fast sieve, but that's what it is.
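To make the "probability weights" point concrete, here is a minimal toy sketch (the corpus, context, and word choices are all invented for illustration): a model built purely from frequency counts has no notion of truth, so a falsehood in its training data becomes just another weight it samples from.

```python
import random
from collections import Counter

# Toy "training data": the model only sees frequencies, never truth.
# The third sentence is false, but it is counted like any other.
corpus = [
    ("the", "sky", "is", "blue"),
    ("the", "sky", "is", "blue"),
    ("the", "sky", "is", "green"),   # a falsehood, ingested as-is
]

# "Training": count which word follows the context ("sky", "is").
counts = Counter(s[3] for s in corpus if s[1:3] == ("sky", "is"))

# "Inference": pick the next word by probability weight alone.
words = list(counts)
weights = [counts[w] for w in words]
next_word = random.choices(words, weights=weights)[0]
print(next_word)  # "blue" about 2/3 of the time, "green" about 1/3
```

Nothing in this process checks whether the output is true; the falsehood is simply a lower-weighted, but still possible, answer. Real models are vastly larger and more sophisticated, but the sampling-by-weight principle is the same.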
You cannot resolve this by pointing out a factual falsehood to the AI when you become aware of one, because if you already know the answer you have no reason to ask the question. It is therefore the person who asks but does not know who is going to get bamboozled, or worse.