AI doesn't have a base understanding of how physics works. So it thinks it's acceptable if, in a video, an element in the background appears in front of a foreground element in the next frame.
So it's always necessary to keep correcting LLMs, because they only learn by example, and you can't express every possible outcome of every physical process just by example, because physical processes come in infinite variations. LLMs can keep getting closer to matching our physical reality, but when you zoom into the details you'll always find that they come up short.
So you can never really trust an LLM. If we want to make an AI that doesn't make errors, it should understand how physics works.
I don't think the errors really are all that different. Ever since GPT-3.5 came out I've been thinking that the errors were ones a human could have made in a similar context.
>LLMs can keep getting closer to matching our physical reality, but when you zoom into the details you'll always find that they come up short.
Like humans.
>So you can never really trust an LLM.
Can't really trust a human either. That's why we set up elaborate human systems (science, checks and balances in government, law, freedom of speech, markets) to mitigate our constant tendency to be complete fuck-ups. We hallucinate science that doesn't exist, lie to maintain our worldviews, jump to conclusions about guilt, build businesses on bad beliefs, etc.
>If we want to make an AI that doesn't make errors, it should understand how physics works
An AI that doesn't make errors wouldn't be AGI; it would be a godlike superintelligence. I don't think that's even feasible. I think a propensity to make errors is intrinsic to how intelligence functions.
Physics is just one domain that they work in, and I'm pretty sure some of them already do have varying understandings of physics.
But if you ask a human to draw or illustrate a physical setting, they would never draw something that is physically impossible, because the impossibility is obvious to a human.
Of course we make all kinds of little mistakes, but at least we can see that they are mistakes. An LLM can't see its own mistakes; it needs to be corrected by a human.
> Physics is just one domain that they work in, and I'm pretty sure some of them already do have varying understandings of physics.
Yeah, but that would then not be an LLM or a machine-learned thing. We would program it so that it understands the rules of physics, and then it can interpret things based on those rules. But that is a totally different kind of AI, or rather a true AI instead of a next-word predictor that looks like an AI. The development of such AIs goes a lot slower, though, because you can't just keep training it; you actually have to program it. But LLMs can actually help program it ;). Although LLMs are mostly good at currently existing technologies and not necessarily new ones.
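To make that concrete, here's a made-up toy sketch (in Python) of what a hard-coded physical rule could look like, as opposed to something learned from examples. The Shape class and the occlusion check are purely illustrative, not how any real system works:

```python
# Toy example of a programmed physical constraint: a background element
# must never be drawn in front of a foreground element (see the video
# example upthread). Everything here is made up for illustration.
from dataclasses import dataclass, field

@dataclass
class Shape:
    name: str
    depth: float  # distance from the camera; larger = further away
    drawn_in_front_of: set = field(default_factory=set)  # names this shape occludes in the frame

def occlusion_violations(shapes):
    """Return a description of every element that breaks the depth rule."""
    by_name = {s.name: s for s in shapes}
    violations = []
    for s in shapes:
        for other_name in s.drawn_in_front_of:
            other = by_name[other_name]
            if s.depth > other.depth:
                violations.append(f"{s.name} is behind {other.name} but drawn in front of it")
    return violations

print(occlusion_violations([
    Shape("tree", depth=10.0, drawn_in_front_of={"person"}),
    Shape("person", depth=2.0),
]))
# -> ['tree is behind person but drawn in front of it']
```

A rule like that gets checked exactly, every time; a model trained only on examples can merely push the probability of violating it lower and lower.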
To be clear, I'm not saying that LLMs exclusively make non-human errors. I'm more saying that most errors happen for different "reasons" than human errors do.
Think about the strawberry example. I've seen a lot of articles lately showing that not all misspellings of the word "strawberry" reliably trigger letter-counting errors. The general sentiment there is human, but the specific pattern of which misspellings cause trouble is really more unique to LLMs (i.e. different spelling errors trip up humans versus LLMs).
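As a rough sketch of why the pattern differs (assuming the tiktoken package is installed; cl100k_base is just one common tokenizer, not necessarily what any particular model uses):

```python
# The model works on sub-word tokens, not letters, so the letter count is
# never directly in its input. Which misspellings cause trouble depends on
# how they happen to tokenize, which is not how human misreadings work.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for word in ["strawberry", "strawbery", "strawberrry"]:
    pieces = [enc.decode([t]) for t in enc.encode(word)]
    print(f"{word}: {word.count('r')} r's, tokens: {pieces}")
```

Counting the r's is trivial for code (and for a careful human); the interesting part is that the token pieces change in non-obvious ways when you misspell the word.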
The part that makes it challenging is that we don't know these "triggers." You could have a prompt that has 95% accuracy, but that inexplicably drops to 50% if the word "green" is in the question (or something like that).
When I try to remember something, my brain often synthesizes new things by filling in the gaps.
This would be where I often say "I might be imagining it, but..." or "I could have sworn there was a..."
In such cases the thing that saves the human brain is double-checking against reality (e.g. googling it to make sure).
Miscounting the number of r's in strawberry by glancing at the word also seems like a pretty human mistake.