mirror/LocalAI
mirror of https://github.com/mudler/LocalAI.git synced 2025-12-31 06:29:55 -06:00
LocalAI/backend at commit 2e4dc6456f545561ea2dce98b8c1bc77006edcc8

Latest commit: 2e4dc6456f by LocalAI [bot]
chore: ⬆️ Update ggml-org/llama.cpp to fb22dd07a639e81c7415e30b146f545f1a2f2caf (#6112)
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
2025-08-20 09:01:36 +02:00
Name                 | Last commit                                                                                   | Date
cpp                  | chore: ⬆️ Update ggml-org/llama.cpp to fb22dd07a639e81c7415e30b146f545f1a2f2caf (#6112)        | 2025-08-20 09:01:36 +02:00
go                   | chore: ⬆️ Update ggml-org/whisper.cpp to fc45bb86251f774ef817e89878bb4c2636c8a58f (#6089)      | 2025-08-19 08:10:25 +02:00
python               | Revert "chore(deps): bump transformers from 4.48.3 to 4.55.2 in /backend/python/coqui" (#6105) | 2025-08-19 15:00:33 +02:00
backend.proto        | feat(stablediffusion-ggml): add support to ref images (flux Kontext) (#5935)                   | 2025-07-30 22:42:34 +02:00
Dockerfile.golang    | fix(intel): Set GPU vendor on Intel images and cleanup (#5945)                                 | 2025-07-31 19:44:46 +02:00
Dockerfile.llama-cpp | feat: do not bundle llama-cpp anymore (#5790)                                                  | 2025-07-18 13:24:12 +02:00
Dockerfile.python    | feat: Add backend gallery (#5607)                                                              | 2025-06-15 14:56:52 +02:00
index.yaml           | feat(diffusers): add builds for nvidia-l4t (#6004)                                             | 2025-08-08 22:48:38 +02:00