Saturday, January 17, 2026

AI And Related: Short Them All. They're ZEROS in [Market-Ticker-Nad]

 There have been reports of AIs "hallucinating"; that's not really what's going on.  But it doesn't matter what you call it, or the underlying reason for it.  What's important is that it is not fixable, it is in all these models, it's a critical fault, and it constitutes "inherent vice."  Therefore, in any circumstance where the problem space is not strictly bounded and checked by humans, what is produced may be literal sewage with no way to be certain one way or another in advance.

The basic issue is that you cannot know that what it ingests and thus "learns" is truthful, so its processing is poisoned by both intentional and unintentional falsehoods (whether out of ignorance or even things it ingests that were jokes).  It will conflate one thing with another because it cannot discern that they are not the same and thus should not be classified together, and similar failures abound.  All an AI is, when you get down to it, is a probability weight on a computer-driven sieve.  That's it.  It's a very large sieve and very fast, but that's what it is.
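To make the "probability weight on a sieve" point concrete, here is a toy sketch (my own hypothetical illustration, not any real model's code) of next-token sampling: the model just samples words in proportion to learned weights, and a falsehood in the training data gets weighted in exactly like a fact, because there is no notion of "true" anywhere in the machinery.

```python
import random

# Toy "learned" weights: counts of which word followed which in the
# training text.  A single false claim in the data still gets a weight,
# indistinguishable from a true one.  (Hypothetical data for illustration.)
transitions = {
    "the":     {"cat": 5, "senator": 2},
    "cat":     {"sat": 4, "is": 1},
    "senator": {"is": 3},
    "is":      {"a": 4},
    "a":       {"felon": 1, "mammal": 3},  # one ingested falsehood, still weighted
}

def next_token(word, rng):
    """Sample the next word in proportion to its learned weight."""
    choices = transitions[word]
    words = list(choices)
    weights = [choices[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)
sentence = ["the"]
while sentence[-1] in transitions:
    sentence.append(next_token(sentence[-1], rng))
print(" ".join(sentence))
```

Run it a few times with different seeds and sometimes the sieve emits "the senator is a felon" -- not because anything checked that claim, but because the weights happened to line up that way.  That is the entire mechanism.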

You cannot resolve this by pointing out a factual falsehood to the AI when you become aware of one, because if you already know the answer you have no reason to ask the question.  It is therefore the person who asks but does not know who is going to get bamboozled or worse......

https://market-ticker.org/akcs-www?post=254785 

....That's a very small example but a really serious one, because there are all manner of such "inventions" that are liability-generating claims if propagated or relied upon.  What if the AI claims someone is a felon?  What if the AI claims some credential or experience which is in fact germane to what you intend to do and upon which you rely, you take that action, and it's false?  Let me be clear: The amount of conflation and outright invention of alleged facts which are not in fact true (or, even worse, for which there is no evidence of any sort in either direction!) is so pervasive, not just in my experience but also as others have experienced, that if you rely on said AI in a way that leads to liability or even injury you are going to get a dick so far up your**** your tonsils will be tickled.  If, for example, you use this sort of technology to make a health decision, I hope you are right with God, because you've got a decent chance of meeting him sooner rather than later as a direct result of your stupid reliance, whether personally or through some alleged "professional" such as a physician.