LocalAI
mirror of https://github.com/mudler/LocalAI.git (synced 2025-12-31 06:29:55 -06:00)
LocalAI / backend / cpp / llama-cpp at 18fcd8557cf394780cfeae11afe064aa187e449a

Latest commit: 18fcd8557c fix(llama.cpp): support gfx1200 (#6045)
Author: Ettore Di Giacinto
Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
Date: 2025-08-12 22:04:30 +02:00
Name            | Last commit                                                  | Date
--------------- | ------------------------------------------------------------ | --------------------------
patches/        | feat: do not bundle llama-cpp anymore (#5790)                | 2025-07-18 13:24:12 +02:00
CMakeLists.txt  | feat: do not bundle llama-cpp anymore (#5790)                | 2025-07-18 13:24:12 +02:00
grpc-server.cpp | fix(llama.cpp): do not default to linear rope (#5982)        | 2025-08-06 23:20:28 +02:00
Makefile        | fix(llama.cpp): support gfx1200 (#6045)                      | 2025-08-12 22:04:30 +02:00
package.sh      | feat: do not bundle llama-cpp anymore (#5790)                | 2025-07-18 13:24:12 +02:00
prepare.sh      | feat: do not bundle llama-cpp anymore (#5790)                | 2025-07-18 13:24:12 +02:00
run.sh          | feat: refactor build process, drop embedded backends (#5875) | 2025-07-22 16:31:04 +02:00