I tried webmentions for a while, but then switched to plain POSSE (Publish (on your) Own Site, Syndicate Elsewhere). So comments live on other platforms at the moment.
How does the space you are in, the environment, affect the way you think? Do you "think differently", or does your mind behave differently, when you are in a small, restricted space compared to a large, open one?
To me, the major issue with self-hosting (once you've overcome the tech barrier, etc.) has always been protection. Not from external actors or attacks, but from incidents, by which I mean backups. The safest option is online backup, which is expensive and takes your data sovereignty away once again. Or I can make a hard copy once a year, take it to my parents (who live in a different country) for storage, and swap the backups out. Either way, very suboptimal. If anyone has a good way to achieve this, please let me know.
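For what it's worth, a minimal sketch of the once-a-year offline-copy approach, assuming the data lives under one directory and the swap drive is mounted; both paths are hypothetical, and pairing each archive with a checksum is my own addition so you can verify the new copy before retiring the old one:

```python
# Minimal sketch of the yearly offline-copy approach, Python stdlib only.
# Both paths are placeholders; point them at your real data and swap drive.
import hashlib
import tarfile
from datetime import date
from pathlib import Path

DATA_DIR = Path("/srv/selfhosted")    # hypothetical: the data to protect
DRIVE = Path("/mnt/offsite-drive")    # hypothetical: the drive you swap out

def make_offline_backup() -> Path:
    """Write a dated tar.gz of DATA_DIR to DRIVE, plus a SHA-256 checksum."""
    archive = DRIVE / f"backup-{date.today().isoformat()}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(DATA_DIR, arcname=DATA_DIR.name)

    # The checksum lets you verify the new copy *before* wiping the old one,
    # which matters when you only rotate once a year.
    sha = hashlib.sha256()
    with archive.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            sha.update(chunk)
    archive.with_name(archive.name + ".sha256").write_text(
        f"{sha.hexdigest()}  {archive.name}\n"
    )
    return archive

if __name__ == "__main__":
    print(f"Wrote {make_offline_backup()}")
```

Tools like restic or borg do the same job better (encryption, deduplication, remote targets), but the sketch shows the moving parts of the swap-drive routine.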
> Have you experienced this kind of problem? In what tasks does it show up most for you?
I have experienced this type of problem. A colleague asked an LLM to convert a list of items from a text into a table, and the model somehow managed to skip 3 of the 7 items.
> Would solving it be valuable enough to pay for? Do you see this as something LLM providers will solve themselves soon, or is there room for an external solution?
The solution I have found so far is to prompt the model to write and execute code, which makes the responses more reproducible. That way most of the variability ends up in the code itself, while the code's output tends to be consistent, at least in my experience.
That said, I do feel like current providers are already working on this, or soon will be.
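To make the write-code approach concrete, here is a hypothetical sketch of the kind of throwaway script you would have the model emit and run for the list-to-table case above. The input text, regex, and column names are all made up; the point is that the parsed rows can be counted, so a dropped item fails loudly instead of silently:

```python
# Hypothetical sketch: the kind of deterministic script an LLM can be asked
# to write and execute, instead of generating the table free-form.
import re

TEXT = """\
- apples: 3
- oranges: 7
- pears: 2
"""  # made-up stand-in for the source text

def list_to_table(text: str) -> str:
    """Convert '- name: value' lines into a Markdown table."""
    rows = []
    for line in text.splitlines():
        m = re.match(r"[-*]\s*(.+?):\s*(.+)", line)
        if m:
            rows.append(m.groups())

    # Unlike free-form generation, a skipped item is detectable here.
    n_items = sum(1 for line in text.splitlines() if line.strip())
    assert len(rows) == n_items, "row count does not match item count"

    lines = ["| Item | Count |", "| --- | --- |"]
    lines += [f"| {name} | {count} |" for name, count in rows]
    return "\n".join(lines)

print(list_to_table(TEXT))
```

The table is now a pure function of the input text, so rerunning it gives the same output every time, and the assertion catches exactly the "3 of 7 items missing" failure mode.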