
Humans assume that being able to produce meaningful language is indicative of intelligence, because until LLMs the only way to do so was through human intelligence.


Yep. Then again, the average human also considered proficiency in mathematics to be indicative of intelligence until we invented the pocket calculator, so maybe we're just not smart enough to define what intelligence is.


Sorry if I'm being pedantic, but I think you mean arithmetic, not mathematics in general.


Not really, we saw this decades ago: https://en.wikipedia.org/w/index.php?title=ELIZA_effect
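
For context on why the ELIZA effect is so striking: the program that fooled people was nothing more than keyword spotting plus template substitution. A minimal sketch of that style of program (hypothetical rules, not Weizenbaum's actual DOCTOR script):

    import re

    # Illustrative ELIZA-style rules: a keyword pattern and a response template.
    # Not the original script; just enough to show the mechanism.
    RULES = [
        (re.compile(r"\bi am (.*)", re.I), "Why do you say you are {0}?"),
        (re.compile(r"\bi feel (.*)", re.I), "How long have you felt {0}?"),
        (re.compile(r"\bmy (.*)", re.I), "Tell me more about your {0}."),
    ]

    def respond(utterance: str) -> str:
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(match.group(1).rstrip(".!?"))
        return "Please go on."  # fallback when no keyword matches

    print(respond("I am worried about my job"))
    # -> Why do you say you are worried about my job?

Even rules this crude led people to attribute understanding to the program, which is exactly the effect being referred to here.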


I don't think I'm falling for the ELIZA effect.* I just feel like if you have a small enough model that can accurately handle a wide enough range of tasks, and is resistant to a wide enough range of perturbations to the input, it's simpler to assume it's doing some sort of meaningful simplification inside there. I didn't call it intelligence.

* But I guess that's what someone who's falling for the ELIZA effect would say.



