Richard Palethorpe
bf6426aef2
feat: Realtime API support reboot ( #5392 )
* feat(realtime): Initial Realtime API implementation
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* chore: go mod tidy
Signed-off-by: Richard Palethorpe <io@richiejp.com >
* feat: Implement transcription-only mode for realtime API
Reduce the scope of the realtime API for the initial release and make
transcription-only mode functional.
Signed-off-by: Richard Palethorpe <io@richiejp.com >
* chore(build): Build backends on a separate layer to speed up core-only changes
Signed-off-by: Richard Palethorpe <io@richiejp.com >
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
Signed-off-by: Richard Palethorpe <io@richiejp.com >
Co-authored-by: Ettore Di Giacinto <mudler@localai.io >
2025-05-25 22:25:05 +02:00
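The transcription-only mode introduced in the entry above implies a WebSocket session in which a client streams audio and receives transcription events back. Below is a minimal client-side sketch in Go; the /v1/realtime path, the OpenAI-style event names (session.update, input_audio_buffer.append/commit, conversation.item.input_audio_transcription.completed) and the whisper-1 model name are assumptions for illustration, not details confirmed by this changelog.

```go
// Hypothetical transcription-only realtime client. The /v1/realtime path,
// the event names, and the whisper-1 model name are assumptions (OpenAI-style),
// not taken from the commits above.
package main

import (
	"encoding/base64"
	"fmt"
	"log"
	"os"

	"github.com/gorilla/websocket"
)

type event struct {
	Type    string         `json:"type"`
	Session map[string]any `json:"session,omitempty"`
	Audio   string         `json:"audio,omitempty"`
}

func main() {
	conn, _, err := websocket.DefaultDialer.Dial("ws://localhost:8080/v1/realtime?model=whisper-1", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Ask for transcription-only behaviour (assumed session shape).
	if err := conn.WriteJSON(event{
		Type:    "session.update",
		Session: map[string]any{"input_audio_transcription": map[string]string{"model": "whisper-1"}},
	}); err != nil {
		log.Fatal(err)
	}

	// Send one base64-encoded PCM16 chunk, then commit the audio buffer.
	pcm, err := os.ReadFile("chunk.pcm")
	if err != nil {
		log.Fatal(err)
	}
	conn.WriteJSON(event{Type: "input_audio_buffer.append", Audio: base64.StdEncoding.EncodeToString(pcm)})
	conn.WriteJSON(event{Type: "input_audio_buffer.commit"})

	// Read server events until a transcription result arrives.
	for {
		var msg map[string]any
		if err := conn.ReadJSON(&msg); err != nil {
			log.Fatal(err)
		}
		fmt.Println("event:", msg["type"])
		if msg["type"] == "conversation.item.input_audio_transcription.completed" {
			fmt.Println("transcript:", msg["transcript"])
			return
		}
	}
}
```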
LocalAI [bot]
4a91950848
chore: ⬆️ Update ggml-org/llama.cpp to d13d0f6135803822ec1cd7e3efb49360b88a1bdf ( #5448 )
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-05-24 08:50:41 +02:00
LocalAI [bot]
4614ea1685
chore: ⬆️ Update ggml-org/whisper.cpp to 13d92d08ae26031545921243256aaaf0ee057943 ( #5449 )
⬆️ Update ggml-org/whisper.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-05-23 23:44:06 +00:00
LocalAI [bot]
c62f2bb336
chore: ⬆️ Update ggml-org/llama.cpp to 8a1d206f1d2b4e45918b589f3165b4be232f7ba8 ( #5440 )
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-05-23 09:22:38 +02:00
LocalAI [bot]
38aeca6f9c
chore: ⬆️ Update ggml-org/whisper.cpp to 78b31ca7824500e429ba026c1a9b48e0b41c50cb ( #5439 )
⬆️ Update ggml-org/whisper.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-05-23 06:22:41 +00:00
LocalAI [bot]
bac3022044
chore: ⬆️ Update ggml-org/whisper.cpp to bd1cb0c8e3a04baa411dc12c1325b6a9f12ee7f4 ( #5424 )
⬆️ Update ggml-org/whisper.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com >
2025-05-22 21:49:06 +02:00
LocalAI [bot]
cd41701524
chore: ⬆️ Update ggml-org/llama.cpp to 8e186ef0e764c7a620e402d1f76ebad60bf31c49 ( #5423 )
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com >
2025-05-22 21:48:51 +02:00
LocalAI [bot]
ef6fc052eb
chore: ⬆️ Update ggml-org/llama.cpp to b7a17463ec190aeee7b9077c606c910fb4688b84 ( #5399 )
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-05-21 09:06:09 +02:00
LocalAI [bot]
7ff35c08ac
chore: ⬆️ Update ggml-org/whisper.cpp to 62dc8f7d7b72ca8e75c57cd6a100712c631fa5d5 ( #5398 )
⬆️ Update ggml-org/whisper.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-05-21 09:00:42 +02:00
LocalAI [bot]
456b4982ef
chore: ⬆️ Update ggml-org/llama.cpp to 6a2bc8bfb7cd502e5ebc72e36c97a6f848c21c2c ( #5390 )
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-05-19 01:25:22 +00:00
LocalAI [bot]
cfc73c7773
chore: ⬆️ Update ggml-org/llama.cpp to e3a7cf6c5bf6a0a24217f88607b06e4405a2b5d9 ( #5384 )
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-05-18 01:21:13 +00:00
Ettore Di Giacinto
6d5bde860b
feat(llama.cpp): upgrade and use libmtmd ( #5379 )
* WIP
* wip
* wip
* Make it compile
* Update json.hpp
* this shouldn't be private for now
* Add logs
* Reset auto detected template
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Re-enable grammars
* This seems to be broken - 360a9c98e1 (diff-a18a8e64e12a01167d8e98fc) […]cccf0d4eed09d76d879L2998-L3207
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Placeholder
* Simplify image loading
* use completion type
* disable streaming
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* correctly return timings
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Remove some debug logging
* Adapt tests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Keep header
* embedding: do not use oai type
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Sync from server.cpp
* Use utils and json directly from llama.cpp
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Sync with upstream
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* fix: copy json.hpp from the correct location
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* fix: add httplib
* sync llama.cpp
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Embeddings: set OAICOMPAT_TYPE_EMBEDDING
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* feat: sync with server.cpp by including it
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* make it darwin-compatible
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2025-05-17 16:02:53 +02:00
LocalAI [bot]
6ef383033b
chore: ⬆️ Update ggml-org/whisper.cpp to d1f114da61b1ae1e70b03104fad42c9dd666feeb ( #5381 )
⬆️ Update ggml-org/whisper.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-05-17 00:35:17 +00:00
LocalAI [bot]
3033845f94
chore: ⬆️ Update ggml-org/whisper.cpp to 20a20decd94badfd519a07ea91f0bba8b8fc4dea ( #5374 )
⬆️ Update ggml-org/whisper.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-05-16 12:46:16 +02:00
LocalAI [bot]
30704292de
chore: ⬆️ Update ggml-org/whisper.cpp to f389d7e3e56bbbfec49fd333551927a0fcbb7213 ( #5367 )
⬆️ Update ggml-org/whisper.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-05-15 00:34:16 +00:00
LocalAI [bot]
cb28aef93b
chore: ⬆️ Update ggml-org/whisper.cpp to f89056057511a1657af90bb28ef3f21e5b1f33cd ( #5364 )
⬆️ Update ggml-org/whisper.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-05-14 09:24:16 +02:00
LocalAI [bot]
6adb019f8f
chore: ⬆️ Update ggml-org/llama.cpp to de4c07f93783a1a96456a44dc16b9db538ee1618 ( #5358 )
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-05-13 22:00:19 +02:00
LocalAI [bot]
fcaa0a2f01
chore: ⬆️ Update ggml-org/whisper.cpp to e41bc5c61ae66af6be2bd7011769bb821a83e8ae ( #5357 )
⬆️ Update ggml-org/whisper.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-05-13 21:59:50 +02:00
LocalAI [bot]
63f7c86c4d
chore: ⬆️ Update ggml-org/llama.cpp to 9a390c4829cd3058d26a2e2c09d16e3fd12bf1b1 ( #5351 )
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-05-12 09:24:54 +02:00
LocalAI [bot]
ac89bf77bf
chore: ⬆️ Update ggml-org/whisper.cpp to 2e310b841e0b4e7cf00890b53411dd9f8578f243 ( #4785 )
⬆️ Update ggml-org/whisper.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-05-12 01:30:35 +00:00
LocalAI [bot]
2612a0c910
chore: ⬆️ Update ggml-org/llama.cpp to 15e6125a397f6086c1dfdf7584acdb7c730313dc ( #5345 )
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-05-11 09:21:46 +02:00
Ettore Di Giacinto
6978eec69f
feat(whisper.cpp): gpu support ( #5344 )
* fix(whisper.cpp): gpu support
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Try to fix apple tests
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2025-05-10 22:02:40 +02:00
LocalAI [bot]
2fcfe54466
chore: ⬆️ Update ggml-org/llama.cpp to 33eff4024084d1f0c8441b79f7208a52fad79858 ( #5343 )
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-05-10 10:07:39 +02:00
Ettore Di Giacinto
4e7506a3be
fix(whisper): add vulkan flag
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com >
2025-05-10 08:46:21 +02:00
Ettore Di Giacinto
2a46217f90
Update Makefile
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com >
2025-05-09 23:17:18 +02:00
Ettore Di Giacinto
31ff9dbd52
chore(Makefile): small cleanups, disable openmp on whisper
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2025-05-09 22:37:18 +02:00
Ettore Di Giacinto
9483abef03
fix(whisper/sycl): disable
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2025-05-09 22:36:09 +02:00
Ettore Di Giacinto
ce3e8b3e31
fix(whisper/sycl): use icx when running go build
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2025-05-09 21:48:09 +02:00
Ettore Di Giacinto
f3bb84c9a7
feat(whisper): link vulkan, hipblas and sycl
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2025-05-09 19:25:26 +02:00
Ettore Di Giacinto
ecb1297582
fix: specify icx and icpx only on whisper.cpp
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2025-05-09 10:58:30 +02:00
Ettore Di Giacinto
73fc702b3c
fix: this is not needed
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2025-05-09 10:28:53 +02:00
Ettore Di Giacinto
e3af62ae1a
feat: Add sycl support for whisper.cpp ( #5341 )
2025-05-09 09:31:02 +02:00
Ettore Di Giacinto
dc21604741
chore(deps): bump whisper.cpp ( #5338 )
* chore(deps): bump whisper.cpp
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* add libggml-metal
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Fixups for macOS arm64
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* adjust cublas for whisper.cpp
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2025-05-09 08:17:45 +02:00
LocalAI [bot]
5433f1a70e
chore: ⬆️ Update ggml-org/llama.cpp to f05a6d71a0f3dbf0730b56a1abbad41c0f42e63d ( #5340 )
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-05-08 23:13:28 +00:00
LocalAI [bot]
e90f2cb0ca
chore: ⬆️ Update ggml-org/llama.cpp to 814f795e063c257f33b921eab4073484238a151a ( #5331 )
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-05-08 09:25:13 +02:00
LocalAI [bot]
a86e8c78f1
chore: ⬆️ Update ggml-org/llama.cpp to 91a86a6f354aa73a7aab7bc3d283be410fdc93a5 ( #5329 )
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-05-06 23:39:10 +00:00
Ettore Di Giacinto
adb24214c6
chore(deps): bump llama.cpp to b34c859146630dff136943abc9852ca173a7c9d6 ( #5323 )
chore(deps): bump llama.cpp to 'b34c859146630dff136943abc9852ca173a7c9d6'
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2025-05-06 11:21:25 +02:00
LocalAI [bot]
a085f61fdc
chore: ⬆️ Update ggml-org/llama.cpp to 9fdfcdaeddd1ef57c6d041b89cd8fb7048a0f028 ( #5316 )
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-05-04 23:00:25 +00:00
Ettore Di Giacinto
21bdfe5fa4
fix: use rice when embedding large binaries ( #5309 )
* fix(embed): use go-rice for large backend assets
Golang embed FS has a hard limit that we might exceed when providing
many binary alternatives.
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* simplify golang deps
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* chore(tests): switch to testcontainers and print logs
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* fix(tests): do not build a test binary
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* small fixup
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2025-05-04 16:42:42 +02:00
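For context on the embed limitation mentioned in the entry above: Go's //go:embed bakes files into the binary at compile time, while go.rice can also generate or append the assets around the normal build, which is what makes it workable for many large backend binaries. A minimal sketch of the go.rice pattern, with placeholder directory and file names (not LocalAI's actual layout):

```go
// Illustrative use of go.rice for bundling large assets; the "assets"
// directory and "backend-bin" file are placeholders, not LocalAI's layout.
package main

import (
	"fmt"
	"log"

	rice "github.com/GeertJohan/go.rice"
)

func main() {
	// During development this reads from disk; once `rice embed-go` or
	// `rice append` has run, the same call reads the bundled contents.
	box, err := rice.FindBox("assets")
	if err != nil {
		log.Fatal(err)
	}

	data, err := box.Bytes("backend-bin") // read one bundled file
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("bundled asset: %d bytes\n", len(data))
}
```

Unlike //go:embed, the rice box can also be appended to an already-built executable (`rice append --exec <binary>`), which is the property that sidesteps the size issue described in the commit.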
LocalAI [bot]
01bbb31fb3
chore: ⬆️ Update ggml-org/llama.cpp to 36667c8edcded08063ed51c7d57e9e086bbfc903 ( #5300 )
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-05-04 09:23:01 +02:00
Ettore Di Giacinto
1fc6d469ac
chore(deps): bump llama.cpp to '1d36b3670b285e69e58b9d687c770a2a0a192194' ( #5307 )
chore(deps): bump llama.cpp to '1d36b3670b285e69e58b9d687c770a2a0a192194'
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2025-05-03 18:44:40 +02:00
LocalAI [bot]
573a3f104c
chore: ⬆️ Update ggml-org/llama.cpp to d7a14c42a1883a34a6553cbfe30da1e1b84dfd6a ( #5292 )
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-05-02 09:21:38 +02:00
LocalAI [bot]
58c9ade2e8
chore: ⬆️ Update ggml-org/llama.cpp to 3e168bede4d27b35656ab8026015b87659ecbec2 ( #5284 )
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-05-01 10:01:39 +02:00
Wyatt Neal
4076ea0494
fix: vllm missing logprobs ( #5279 )
* working to address missing items
referencing #3436, #2930 - if I could test it, this might show that the
output from the vllm backend is processed and returned to the user
Signed-off-by: Wyatt Neal <wyatt.neal+git@gmail.com >
* adding in vllm tests to test-extras
Signed-off-by: Wyatt Neal <wyatt.neal+git@gmail.com >
* adding in tests to pipeline for execution
Signed-off-by: Wyatt Neal <wyatt.neal+git@gmail.com >
* removing todo block, test via pipeline
Signed-off-by: Wyatt Neal <wyatt.neal+git@gmail.com >
---------
Signed-off-by: Wyatt Neal <wyatt.neal+git@gmail.com >
2025-04-30 12:55:07 +00:00
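For context on what the logprobs fix above surfaces: an OpenAI-compatible chat completion request can set logprobs/top_logprobs, and the change is about the vllm backend returning those values to the caller. A hedged Go sketch of such a request; the base URL, port, and model name are placeholders, not taken from the commits.

```go
// Hypothetical OpenAI-compatible request asking for logprobs; the base URL
// and model name are placeholders, not taken from the commits above.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	body, _ := json.Marshal(map[string]any{
		"model": "my-vllm-model", // placeholder model name
		"messages": []map[string]string{
			{"role": "user", "content": "Say hello"},
		},
		"logprobs":     true,
		"top_logprobs": 3,
	})

	resp, err := http.Post("http://localhost:8080/v1/chat/completions",
		"application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var out struct {
		Choices []struct {
			Message struct {
				Content string `json:"content"`
			} `json:"message"`
			Logprobs json.RawMessage `json:"logprobs"`
		} `json:"choices"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}
	for _, c := range out.Choices {
		fmt.Println(c.Message.Content)
		fmt.Println(string(c.Logprobs)) // per-token log-probabilities, if the backend returns them
	}
}
```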
LocalAI [bot]
2b2d907a3a
chore: ⬆️ Update ggml-org/llama.cpp to e2e1ddb93a01ce282e304431b37e60b3cddb6114 ( #5278 )
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-04-29 21:46:08 +00:00
LocalAI [bot]
bc1e059259
chore: ⬆️ Update ggml-org/llama.cpp to 5f5e39e1ba5dbea814e41f2a15e035d749a520bc ( #5267 )
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-04-29 09:49:42 +02:00
LocalAI [bot]
23f347e687
chore: ⬆️ Update ggml-org/llama.cpp to ced44be34290fab450f8344efa047d8a08e723b4 ( #5258 )
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-04-27 21:59:35 +00:00
LocalAI [bot]
2a92effc5d
chore: ⬆️ Update ggml-org/llama.cpp to 77d5e9a76a7b4a8a7c5bf9cf6ebef91860123cba ( #5254 )
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-04-27 09:21:02 +02:00
LocalAI [bot]
d66396201a
chore: ⬆️ Update ggml-org/llama.cpp to 295354ea6848a77bdee204ee1c971d9b92ffcca9 ( #5245 )
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-04-26 00:05:16 +02:00
LocalAI [bot]
701cd6b6d5
chore: ⬆️ Update ggml-org/llama.cpp to 226251ed56b85190e18a1cca963c45b888f4953c ( #5240 )
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-04-25 08:42:22 +02:00