Mirror of https://github.com/mudler/LocalAI.git (synced 2025-12-30 22:20:20 -06:00)
LocalAI / backend / cpp / llama-cpp
at commit 9f2c9cd6911b4e83d3f41a4588820efea1f6e0a1

Latest commit: 9f2c9cd691 feat(llama.cpp): Add gfx1201 support (#6125) ...
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
2025-08-23 23:06:01 +02:00
File             Last commit                                                                          Date
patches          feat: do not bundle llama-cpp anymore (#5790)                                        2025-07-18 13:24:12 +02:00
CMakeLists.txt   feat: do not bundle llama-cpp anymore (#5790)                                        2025-07-18 13:24:12 +02:00
grpc-server.cpp  chore(deps): bump llama.cpp to '45363632cbd593537d541e81b600242e0b3d47fc' (#6122)    2025-08-23 08:39:10 +02:00
Makefile         feat(llama.cpp): Add gfx1201 support (#6125)                                         2025-08-23 23:06:01 +02:00
package.sh       feat: do not bundle llama-cpp anymore (#5790)                                        2025-07-18 13:24:12 +02:00
prepare.sh       feat: do not bundle llama-cpp anymore (#5790)                                        2025-07-18 13:24:12 +02:00
run.sh           fix(llama-cpp/darwin): make sure to bundle libutf8 libs (#6060)                      2025-08-14 17:56:35 +02:00