mirror / LocalAI
Mirror of https://github.com/mudler/LocalAI.git, synced 2026-01-04 09:40:32 -06:00
LocalAI / backend at commit 3295a298f4f44fb4e0aaf73c327c01dd141956aa

Latest commit c092633cd7 by Ettore Di Giacinto: feat(models): add support to qwen-image (#5975)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-08-06 10:36:53 +02:00
| Name | Last commit | Date |
| --- | --- | --- |
| cpp | chore: ⬆️ Update ggml-org/llama.cpp to fd1234cb468935ea087d6929b2487926c3afff4b (#5972) | 2025-08-05 23:14:43 +02:00 |
| go | chore(stable-diffusion): bump, set GGML_MAX_NAME (#5961) | 2025-08-03 10:47:02 +02:00 |
| python | feat(models): add support to qwen-image (#5975) | 2025-08-06 10:36:53 +02:00 |
| backend.proto | feat(stablediffusion-ggml): add support to ref images (flux Kontext) (#5935) | 2025-07-30 22:42:34 +02:00 |
| Dockerfile.golang | fix(intel): Set GPU vendor on Intel images and cleanup (#5945) | 2025-07-31 19:44:46 +02:00 |
| Dockerfile.llama-cpp | feat: do not bundle llama-cpp anymore (#5790) | 2025-07-18 13:24:12 +02:00 |
| Dockerfile.python | feat: Add backend gallery (#5607) | 2025-06-15 14:56:52 +02:00 |
| index.yaml | fix(backend gallery): intel images for python-based backends, re-add exllama2 (#5928) | 2025-07-28 15:15:19 +02:00 |