mirror/LocalAI
mirror of https://github.com/mudler/LocalAI.git synced 2026-01-04 17:50:13 -06:00
LocalAI/backend at commit 9f2c9cd6911b4e83d3f41a4588820efea1f6e0a1

Latest commit: Ettore Di Giacinto — feat(llama.cpp): Add gfx1201 support (#6125)
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-08-23 23:06:01 +02:00
Name                  Last commit                                                                                Date
cpp                   feat(llama.cpp): Add gfx1201 support (#6125)                                               2025-08-23 23:06:01 +02:00
go                    chore: ⬆️ Update ggml-org/whisper.cpp to fc45bb86251f774ef817e89878bb4c2636c8a58f (#6089)  2025-08-19 08:10:25 +02:00
python                Add mlx-vlm (#6119)                                                                        2025-08-23 23:05:30 +02:00
backend.proto         feat(stablediffusion-ggml): add support to ref images (flux Kontext) (#5935)               2025-07-30 22:42:34 +02:00
Dockerfile.golang     fix(intel): Set GPU vendor on Intel images and cleanup (#5945)                             2025-07-31 19:44:46 +02:00
Dockerfile.llama-cpp  feat: do not bundle llama-cpp anymore (#5790)                                              2025-07-18 13:24:12 +02:00
Dockerfile.python     feat: bundle python inside backends (#6123)                                                 2025-08-23 22:36:39 +02:00
index.yaml            Add mlx-vlm (#6119)                                                                        2025-08-23 23:05:30 +02:00