LocalAI
Mirror of https://github.com/mudler/LocalAI.git (synced 2025-12-31 06:29:55 -06:00)
Files in LocalAI / pkg at commit 9628860c0e2d24ff4183bcb75bea93b0d2a11b2c
Latest commit: 9628860c0e feat(llama.cpp/clip): inject gpu options if we detect GPUs (#5243)
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
2025-04-26 00:04:47 +02:00
Directory     Last commit                                                          Date
assets        …
concurrency   …
downloader    …
functions     chore(deps): update llama.cpp and sync with upstream changes (#4950)  2025-03-06 00:40:58 +01:00
grpc          chore: bump grpc limits to 50MB (#5212)                               2025-04-19 08:53:24 +02:00
langchain     …
library       …
model         feat(llama.cpp/clip): inject gpu options if we detect GPUs (#5243)    2025-04-26 00:04:47 +02:00
oci           …
startup       …
store         …
templates     …
utils         …
xsync         …
xsysinfo      feat(llama.cpp/clip): inject gpu options if we detect GPUs (#5243)    2025-04-26 00:04:47 +02:00
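The latest commit touching `model` and `xsysinfo` injects GPU options when GPUs are detected. A minimal sketch of that idea in Go, assuming hypothetical names (`detectGPUCount`, `injectGPUOptions` and the `gpu_layers:` option syntax are illustrative, not LocalAI's actual API):

```go
package main

import (
	"fmt"
	"strings"
)

// detectGPUCount stands in for a real system probe (e.g. scanning sysfs
// or vendor libraries, as a package like xsysinfo might); here it is a stub.
func detectGPUCount() int { return 1 }

// injectGPUOptions appends a GPU-offload option to the backend options
// when a GPU is present and the caller has not already set one.
func injectGPUOptions(opts []string) []string {
	if detectGPUCount() == 0 {
		return opts // no GPU: leave the options untouched
	}
	for _, o := range opts {
		if strings.HasPrefix(o, "gpu_layers:") {
			return opts // user already chose an offload setting
		}
	}
	// Offload all layers by default when a GPU is available.
	return append(opts, "gpu_layers:-1")
}

func main() {
	fmt.Println(injectGPUOptions([]string{"threads:4"}))
	// → [threads:4 gpu_layers:-1]
}
```

The key design point mirrored here is that detection only supplies a default: an explicit user setting always wins over the injected value.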