Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

Well, they read your code in the actual hiring loop.




My point still stands. I don't know what the LLM is doing so my guess is it's cheating unless there is evidence to the contrary.

I guess your answer to "Try to run Claude Code on your own 'ill-defined' problem" would be "I'm not interested." Correct? I think we can stop here then.

Well that's certainly a challenge when you use LLMs for this test driven style of programming.

Why do you assume it’s cheating?

Because it's a well know failure mode of neural networks & scalar valued optimization problems in general: https://www.nature.com/articles/s42256-020-00257-z

Again, you can just read the code

And? Anthropic is not aware of this 2020 paper? The problem is not solvable?



Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: