
This is just wrong. Ollama has moved off of llama.cpp and is working with hardware partners to support GGML. https://ollama.com/blog/multimodal-models



We keep it for backwards compatibility; all the newer models are implemented inside Ollama directly.


Can you substantiate this more? llama.cpp is also relying on ggml.



