* feat: split remaining backends and drop embedded backends

  - Drop silero-vad, huggingface, and stores backends from the embedded binaries
  - Refactor Makefile and Dockerfile to avoid building gRPC backends
  - Drop the Go code that was used to embed backends
  - Simplify building by using goreleaser

* chore(gallery): be specific with llama-cpp backend templates
* chore(docs): update
* chore(ci): minor fixes
* chore: drop all ffmpeg references
* fix: run protogen-go
* Always enable p2p mode
* Update goreleaser file
* fix(stores): do not always load
* Fix linting issues
* Simplify
* Mac OS fixup

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
20 lines
374 B
YAML
---
name: "moondream2"

config_file: |
    backend: "llama-cpp"
    context_size: 2046
    roles:
      user: "\nQuestion: "
      system: "\nSystem: "
      assistant: "\nAnswer: "
    stopwords:
    - "Question:"
    - "<|endoftext|>"
    f16: true
    template:
      completion: |
        Complete the following sentence: {{.Input}}
      chat: "{{.Input}}\nAnswer:\n"
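The {{.Input}} placeholders in the templates above are Go text/template syntax (LocalAI is written in Go). As a minimal sketch of how the chat template gets filled in at request time, the snippet below renders the "chat" string verbatim; the question text and the manually applied role prefix are hypothetical examples, not part of the config:

    package main

    import (
    	"os"
    	"text/template"
    )

    func main() {
    	// The "chat" template string from the config above, verbatim.
    	tmpl := template.Must(template.New("chat").Parse("{{.Input}}\nAnswer:\n"))

    	// .Input carries the prompt after role prefixes have been applied;
    	// here the user role prefix "\nQuestion: " is prepended by hand
    	// and the question itself is a made-up example.
    	data := struct{ Input string }{Input: "\nQuestion: What is in this image?"}
    	if err := tmpl.Execute(os.Stdout, data); err != nil {
    		panic(err)
    	}
    }

Running this prints the question followed by "Answer:", which matches the assistant role prefix "\nAnswer: " above, so the model continues the transcript as an answer.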