The patterns we see(k)
One of the criticisms of today's AI models (LLMs) is the problem of hallucinations, which apparently boils down to the models seeing spurious correlations and patterns where none exist, and "filling in" when they aren't sure or don't have a definite answer.
But is the human mind really different? Are we that immune to hallucinations?
Not in the literal or schizophrenic sense of the word, but in the sense of seeing patterns where none exist?
Unlike AI hallucinations, the patterns the human mind sees come with some built-in biases. We are simply prone to seeing the patterns we want to see. The AIs do not, at least as yet, have a want function!
This is especially true if you are the sort of person who, like yours truly, at some level believes that the world is conspiring to help you (rational mind: yeah, right!), eventually helping make sense of it all, and who thinks everything happens for a reason. After all, seeing or seeking patterns is a good and easy way to make sense of it all, isn't it?
But what about the patterns that even the rational mind can justify with some mental probabilistic jugglery? You know the kind: a priori, you assume the probability of some X is quite low. Then you see patterns Y, which are themselves low probability. And you know that if X happens to be true, it explains Y nicely. So:
1. Does the fact that Y is low probability mean that X is a good explanation for Y, and hence true?
2. Or can Y simply be explained by randomness and coincidence?
3. Or should the fact that the a priori probability of X is low, a fact independent of Y, remain the guiding factor in your assessment of X? After all, explaining some other low-probability Y might improve the odds for X, but not really to the point of X being likely.
If you are an incorrigible optimist, then you probably go with 1. If you are a rational sceptic, you go with 2. If you are an optimist but at the same time not incorrigibly deluded, then you go with 3.
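For what it's worth, option 3 is roughly what Bayes' rule says. Here is a minimal sketch in Python, with every number invented purely for illustration: even if the pattern Y is twenty times more likely when X is true than under plain coincidence, a very small prior for X keeps the updated probability small.

```python
def bayes_update(prior, p_evidence_given_x, p_evidence_given_not_x):
    """Return P(X | evidence) via Bayes' rule."""
    numerator = p_evidence_given_x * prior
    denominator = numerator + p_evidence_given_not_x * (1 - prior)
    return numerator / denominator

prior_x = 0.001                                        # a priori, X is very unlikely
posterior_after_y = bayes_update(prior_x, 0.20, 0.01)  # Y assumed 20x likelier if X is true
print(posterior_after_y)                               # ~0.02 -- better odds, nowhere near "likely"
```

The odds for X do improve, twenty-fold in fact, but two percent is still a long way from likely.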
But now, what if you could anticipate patterns? What if you could extrapolate from the Y seen so far to a Z, expected to happen in the short term, which is more likely to happen if X is true? Does it change things? If you went with explanation 1 or 2, then probably not all that much.
If you went with 3, then X becomes just maybe a little more likely? I guess?
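Continuing the same made-up sketch (reusing bayes_update and posterior_after_y from above), suppose the anticipated Z does happen and is, say, three times more likely under X than under coincidence. Chaining a second update nudges things up a bit, which is about all option 3 would let you claim.

```python
posterior_after_z = bayes_update(posterior_after_y, 0.15, 0.05)  # Z assumed 3x likelier if X is true
print(posterior_after_z)                                         # ~0.06 -- a nudge up, not a leap
```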
Maybe we aren't just seeing patterns; we are seeking them. Even better if the pattern helps us with that damned thing called validation!