LocalAI (mirror of https://github.com/mudler/LocalAI.git, synced 2026-04-25 20:49:42 -05:00)
Commit: 7e6bf6e7a177848df28e5e0cdfb39b94a43c8c4b
Path: LocalAI/backend/cpp/llama
Latest commit: e843d7df0e by Ettore Di Giacinto
feat(grpc): return consumed token count and update response accordingly (#2035)
Fixes: #1920
2024-04-15 19:47:11 +02:00
| File            | Last commit                                                                      | Date                       |
|-----------------|----------------------------------------------------------------------------------|----------------------------|
| CMakeLists.txt  | deps(llama.cpp): update, support Gemma models (#1734)                            | 2024-02-21 17:23:38 +01:00 |
| grpc-server.cpp | feat(grpc): return consumed token count and update response accordingly (#2035)  | 2024-04-15 19:47:11 +02:00 |
| json.hpp        | …                                                                                | …                          |
| Makefile        | test/fix: OSX Test Repair (#1843)                                                | 2024-03-18 19:19:43 +01:00 |
| utils.hpp       | feat(sycl): Add support for Intel GPUs with sycl (#1647) (#1660)                 | 2024-02-01 19:21:52 +01:00 |