> the guys I get are pretty good and actually learn. The model doesn't.
Core issue. LLMs never ever leave their base level unless you actively modify the prompt. I suppose you _could_ use finetuning to whip it into a useful shape, but that's a lot of work. (https://arxiv.org/pdf/2308.09895 is a good read)
But the flip side of that core issue is that if the base level is high, they're good. Which means for Python & JS, they're pretty darn good. Making pandas garbage work? Just the task for an LLM.
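To make "pandas garbage work" concrete, here's a minimal sketch (made-up data, hypothetical column names) of the kind of boilerplate cleanup an LLM will happily churn out:

```python
import pandas as pd

# Typical "garbage" frame: padded column names, stringly-typed numbers, dupes.
df = pd.DataFrame({
    " First Name ": ["Ada", "Ada", "Grace"],
    "Salary($)": ["1,000", "1,000", "2,500"],
})

# Normalize the column names.
df.columns = [
    c.strip().lower().replace(" ", "_").replace("($)", "_usd")
    for c in df.columns
]

# Parse the comma-formatted numbers and drop duplicate rows.
df["salary_usd"] = df["salary_usd"].str.replace(",", "").astype(int)
df = df.drop_duplicates().reset_index(drop=True)
```

Tedious, well-represented-on-stackoverflow stuff like this is exactly where the base level is already high.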
But yeah, R & nginx aren't a major part of their original training data, so they're stuck at "no clue, whatever stackoverflow said on similar keywords".