Mirror of https://github.com/mudler/LocalAI.git (synced 2026-01-05 01:59:53 -06:00)
When compiling the single binary on Apple, we enforce BUILD_TYPE=metal. However, we still want to keep the vanilla build as a fallback, so that if llama.cpp fails to load Metal (e.g. if the Accelerate framework is missing, or the macOS version is too old) we can still run by offloading to the CPU. The default backend still uses Metal as usual.

Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com>