LocalAI [bot]
0f5cc4c07b
chore: ⬆️ Update ggml-org/llama.cpp to 5c8a717128cc98aa9e5b1c44652f5cf458fd426e ( #7573 )
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-12-14 22:21:54 +01:00
LocalAI [bot]
3e4e6777d8
chore: ⬆️ Update ggml-org/llama.cpp to 5266379bcae74214af397f36aa81b2a08b15d545 ( #7563 )
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-12-14 11:41:10 +01:00
Simon Redman
5de539ab07
fix(7355): Update llama-cpp grpc for v3 interface ( #7566 )
...
* fix(7355): Update llama-cpp grpc for v3 interface
Signed-off-by: Simon Redman <simon@ergotech.com >
* feat(llama-grpc): Trim whitespace from servers list
Signed-off-by: Simon Redman <simon@ergotech.com >
* Trim trailing spaces in grpc-server.cpp
Signed-off-by: Simon Redman <simon@ergotech.com >
---------
Signed-off-by: Simon Redman <simon@ergotech.com >
2025-12-14 11:40:33 +01:00
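The whitespace trim in the entry above is a small but concrete idea: entries in a comma-separated servers list can carry stray spaces that break host resolution. Below is a minimal Python sketch of the same normalization; the real fix lives in the C++ grpc-server.cpp, and the host names here are invented for illustration.

```python
# Illustration only: the actual change is in grpc-server.cpp (C++); this sketches
# the same normalization — split a comma-separated servers list and strip stray
# whitespace so entries like "host1:50051, host2:50051 " resolve cleanly.
def parse_servers(raw: str) -> list[str]:
    """Split a comma-separated server list, trimming whitespace and dropping empties."""
    return [entry.strip() for entry in raw.split(",") if entry.strip()]


if __name__ == "__main__":
    print(parse_servers("host1:50051, host2:50051 ,  "))  # ['host1:50051', 'host2:50051']
```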
LocalAI [bot]
3013d1c7b5
chore: ⬆️ Update leejet/stable-diffusion.cpp to 43a70e819b9254dee0d017305d6992f6bb27f850 ( #7562 )
...
⬆️ Update leejet/stable-diffusion.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-12-13 22:52:20 +01:00
LocalAI [bot]
073b3855d9
chore: ⬆️ Update ggml-org/whisper.cpp to 2551e4ce98db69027d08bd99bcc3f1a4e2ad2cef ( #7561 )
...
⬆️ Update ggml-org/whisper.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-12-13 21:22:14 +00:00
Ettore Di Giacinto
7790a24682
Revert "chore(deps): bump torch from 2.5.1+cxx11.abi to 2.7.1+cpu in /backend/python/diffusers in the pip group across 1 directory" ( #7558 )
...
Revert "chore(deps): bump torch from 2.5.1+cxx11.abi to 2.7.1+cpu in /backend…"
This reverts commit 1b4aa6f1be.
2025-12-13 17:04:46 +01:00
dependabot[bot]
1b4aa6f1be
chore(deps): bump torch from 2.5.1+cxx11.abi to 2.7.1+cpu in /backend/python/diffusers in the pip group across 1 directory ( #7549 )
...
chore(deps): bump torch
Bumps the pip group with 1 update in the /backend/python/diffusers directory: torch.
Updates `torch` from 2.5.1+cxx11.abi to 2.7.1+cpu
---
updated-dependencies:
- dependency-name: torch
  dependency-version: 2.7.1+cpu
  dependency-type: direct:production
  dependency-group: pip
...
Signed-off-by: dependabot[bot] <support@github.com >
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-13 13:12:18 +00:00
Ettore Di Giacinto
504d954aea
Add chardet to requirements-l4t13.txt
...
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com >
2025-12-13 12:59:03 +01:00
Ettore Di Giacinto
6d2a535813
chore(l4t13): use pytorch index ( #7546 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2025-12-13 10:04:57 +01:00
Ettore Di Giacinto
abfb0ff8fe
feat(stablediffusion-ggml): add lora support ( #7542 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2025-12-13 08:29:06 +01:00
LocalAI [bot]
2bd6faaff5
chore: ⬆️ Update leejet/stable-diffusion.cpp to 11ab095230b2b67210f5da4d901588d56c71fe3a ( #7539 )
...
⬆️ Update leejet/stable-diffusion.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-12-12 21:31:13 +00:00
Ettore Di Giacinto
0b130fb811
fix(llama.cpp): handle corner cases with tool array content ( #7528 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2025-12-12 08:15:45 +01:00
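For context on the corner case fixed above: OpenAI-compatible chat messages allow "content" to be either a plain string or an array of typed parts, and tool/assistant messages using the array form trip up code that assumes a string. The following is a hedged Python sketch of the general normalization, not the actual LocalAI code path.

```python
# Hedged sketch of the string-vs-array "content" corner case in OpenAI-style
# messages; not the LocalAI implementation, just the shape of the problem.
def flatten_content(content) -> str:
    if content is None:
        return ""
    if isinstance(content, str):
        return content
    if isinstance(content, list):
        # Keep only textual parts, e.g. {"type": "text", "text": "..."}
        return "".join(
            part.get("text", "")
            for part in content
            if isinstance(part, dict) and part.get("type") == "text"
        )
    return str(content)


print(flatten_content([{"type": "text", "text": "call the weather tool"}]))
```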
LocalAI [bot]
0771a2d3ec
chore: ⬆️ Update ggml-org/llama.cpp to a81a569577cc38b32558958b048228150be63eae ( #7529 )
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-12-11 21:55:44 +00:00
Ettore Di Giacinto
8442f33712
chore(deps): bump stable-diffusion.cpp to '8823dc48bcc1598eb9671da7b69e45338d0cc5a5' ( #7524 )
...
* chore(deps): bump stable-diffusion.cpp to '8823dc48bcc1598eb9671da7b69e45338d0cc5a5'
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* fix(Dockerfile.golang): Make curl noisy to see when download fails
Signed-off-by: Richard Palethorpe <io@richiejp.com >
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
Signed-off-by: Richard Palethorpe <io@richiejp.com >
Co-authored-by: Richard Palethorpe <io@richiejp.com >
2025-12-11 20:32:25 +01:00
LocalAI [bot]
72621a1d1c
chore: ⬆️ Update ggml-org/llama.cpp to 4dff236a522bd0ed949331d6cb1ee2a1b3615c35 ( #7508 )
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-12-11 08:15:38 +01:00
LocalAI [bot]
e1d060d147
chore: ⬆️ Update ggml-org/whisper.cpp to 9f5ed26e43c680bece09df7bdc8c1b7835f0e537 ( #7509 )
...
⬆️ Update ggml-org/whisper.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-12-10 23:09:13 +01:00
Ettore Di Giacinto
32dcb58e89
feat(vibevoice): add new backend ( #7494 )
...
* feat(vibevoice): add backend
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* chore: add workflow and backend index
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* chore(gallery): add vibevoice
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Use self-hosted for intel builds
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Pin python version for l4t
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2025-12-10 21:14:21 +01:00
LocalAI [bot]
ef44ace73f
chore: ⬆️ Update ggml-org/llama.cpp to 086a63e3a5d2dbbb7183a74db453459e544eb55a ( #7496 )
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-12-10 12:05:13 +01:00
Ettore Di Giacinto
74ee1463fe
chore(deps/llama-cpp): bump to '2fa51c19b028180b35d316e9ed06f5f0f7ada2c1' ( #7484 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2025-12-09 15:41:37 +01:00
LocalAI [bot]
6c7b215687
chore: ⬆️ Update ggml-org/whisper.cpp to a8f45ab11d6731e591ae3d0230be3fec6c2efc91 ( #7483 )
...
⬆️ Update ggml-org/whisper.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-12-09 08:33:30 +01:00
dependabot[bot]
bbce461f57
chore(deps): bump protobuf from 6.33.1 to 6.33.2 in /backend/python/transformers ( #7481 )
...
chore(deps): bump protobuf in /backend/python/transformers
Bumps [protobuf](https://github.com/protocolbuffers/protobuf) from 6.33.1 to 6.33.2.
- [Release notes](https://github.com/protocolbuffers/protobuf/releases)
- [Commits](https://github.com/protocolbuffers/protobuf/commits)
---
updated-dependencies:
- dependency-name: protobuf
  dependency-version: 6.33.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...
Signed-off-by: dependabot[bot] <support@github.com >
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-08 22:13:18 +01:00
LocalAI [bot]
5610384d8a
chore: ⬆️ Update ggml-org/llama.cpp to db97837385edfbc772230debbd49e5efae843a71 ( #7447 )
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-12-07 08:32:35 +01:00
LocalAI [bot]
c3493e4917
chore: ⬆️ Update ggml-org/whisper.cpp to a88b93f85f08fc6045e5d8a8c3f94b7be0ac8bce ( #7448 )
...
⬆️ Update ggml-org/whisper.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-12-06 21:26:25 +00:00
LocalAI [bot]
edf7141b9b
chore: ⬆️ Update ggml-org/llama.cpp to 8160b38a5fa8a25490ca33ffdd200cda51405688 ( #7438 )
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-12-06 13:35:24 +01:00
Ettore Di Giacinto
024aa6a55b
chore(deps): bump llama.cpp to 'bde188d60f58012ada0725c6dd5ba7c69fe4dd87' ( #7434 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2025-12-05 00:17:35 +01:00
Copilot
1abbedd732
feat(diffusers): implement dynamic pipeline loader to remove per-pipeline conditionals ( #7365 )
...
* Initial plan
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Add dynamic loader for diffusers pipelines and refactor backend.py
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Fix pipeline discovery error handling and test mock issue
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Address code review feedback: direct imports, better error handling, improved tests
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Address remaining code review feedback: specific exceptions, registry access, test imports
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Add defensive fallback for DiffusionPipeline registry access
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Actually use dynamic pipeline loading for all pipelines in backend
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Use dynamic loader consistently for all pipelines including AutoPipelineForText2Image
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Move dynamic loader tests into test.py for CI compatibility
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Extend dynamic loader to discover any diffusers class type, not just DiffusionPipeline
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Add AutoPipeline classes to pipeline registry for default model loading
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* fix(python): set pyvenv python home
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* do pyenv update during start
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Minor changes
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com >
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
Co-authored-by: Ettore Di Giacinto <mudler@localai.io >
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com >
2025-12-04 19:02:06 +01:00
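The gist of the dynamic loader above is replacing per-pipeline if/elif branches with a lookup of the pipeline class on the diffusers module. A minimal sketch of that pattern follows; the names and error handling are assumptions, and the actual backend.py loader additionally handles class registries, AutoPipeline defaults, and fallbacks.

```python
# Minimal sketch of dynamic pipeline loading: resolve a diffusers class by name
# instead of hard-coding one branch per pipeline. Not the backend.py code.
import diffusers


def load_pipeline(class_name: str, model_id: str, **kwargs):
    """Look up a pipeline class on the diffusers module and instantiate it."""
    cls = getattr(diffusers, class_name, None)
    if cls is None or not hasattr(cls, "from_pretrained"):
        raise ValueError(f"unknown or unsupported pipeline class: {class_name}")
    return cls.from_pretrained(model_id, **kwargs)


# Usage (downloads weights):
# pipe = load_pipeline("StableDiffusionPipeline", "runwayml/stable-diffusion-v1-5")
```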
Richard Palethorpe
c2e4a1f29b
feat(stablediffusion): Passthrough more parameters to support z-image and flux2 ( #7419 )
...
* feat(stablediffusion): Passthrough more parameters to support z-image and flux2
Signed-off-by: Richard Palethorpe <io@richiejp.com >
* chore(z-image): Add Z-Image-Turbo GGML to library
Signed-off-by: Richard Palethorpe <io@richiejp.com >
* fix(stablediffusion-ggml): flush stderr and check errors when writing PNG
Signed-off-by: Richard Palethorpe <io@richiejp.com >
* fix(stablediffusion-ggml): Re-allocate Go strings in C++
Signed-off-by: Richard Palethorpe <io@richiejp.com >
* fix(stablediffusion-ggml): Try to avoid segfaults
Signed-off-by: Richard Palethorpe <io@richiejp.com >
* fix(stablediffusion-ggml): Init sample and easycache params
Signed-off-by: Richard Palethorpe <io@richiejp.com >
---------
Signed-off-by: Richard Palethorpe <io@richiejp.com >
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com >
2025-12-04 17:08:21 +01:00
LocalAI [bot]
ca2e878aaf
chore: ⬆️ Update ggml-org/llama.cpp to e9f9483464e6f01d843d7f0293bd9c7bc6b2221c ( #7421 )
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com >
2025-12-04 11:54:01 +01:00
LocalAI [bot]
7c5a0cde64
chore: ⬆️ Update leejet/stable-diffusion.cpp to 5865b5e7034801af1a288a9584631730b25272c6 ( #7422 )
...
⬆️ Update leejet/stable-diffusion.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com >
2025-12-04 11:29:16 +01:00
Ettore Di Giacinto
edcbf82b31
chore(ci): add wget
2025-12-04 10:01:34 +01:00
Ettore Di Giacinto
6558caca85
chore(ci): adapt also golang-based backends docker images
2025-12-04 09:14:08 +01:00
Ettore Di Giacinto
b4172762d7
chore(ci): do override pip in 24.04
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2025-12-03 22:54:13 +01:00
Ettore Di Giacinto
dc6182bbb1
chore(ci): add wget to llama-cpp docker image builder
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2025-12-03 22:48:41 +01:00
Ettore Di Giacinto
1d1d52da59
chore(ci): small fixups to build arm64 images
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2025-12-03 21:42:33 +01:00
Ettore Di Giacinto
46b1a1848f
chore(ci): minor fixup
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2025-12-03 16:47:31 +01:00
LocalAI [bot]
957eea3da3
chore: ⬆️ Update ggml-org/llama.cpp to 61bde8e21f4a1f9a98c9205831ca3e55457b4c78 ( #7415 )
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com >
2025-12-03 16:27:12 +01:00
Ettore Di Giacinto
ab4f2742a6
chore(ci): minor fixup
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2025-12-03 16:26:33 +01:00
Ettore Di Giacinto
03f3bf2d94
chore(ci): only install runtime libs needed on arm64
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2025-12-03 15:13:21 +01:00
Ettore Di Giacinto
8dfeea2f55
fix: use ubuntu 24.04 for cuda13 l4t images ( #7418 )
...
* fix: use ubuntu 24.04 for cuda13 l4t images
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Drop openblas from containers
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2025-12-03 09:47:03 +01:00
Ettore Di Giacinto
fea9018dc5
Revert "feat(stablediffusion): Passthrough more parameters to support z-image and flux2" ( #7417 )
...
Revert "feat(stablediffusion): Passthrough more parameters to support z-image…"
This reverts commit 4018e59b2a.
2025-12-02 22:14:28 +01:00
Richard Palethorpe
4018e59b2a
feat(stablediffusion): Passthrough more parameters to support z-image and flux2 ( #7414 )
...
Signed-off-by: Richard Palethorpe <io@richiejp.com >
2025-12-02 18:28:26 +01:00
Richard Palethorpe
aaece6685f
chore(deps/stable-diffusion-ggml): update stablediffusion-ggml ( #7411 )
...
* ⬆️ Update leejet/stable-diffusion.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
* fix(stablediffusion-ggml): fixup schedulers and samplers arrays, use default getters
Signed-off-by: Richard Palethorpe <io@richiejp.com >
---------
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Signed-off-by: Richard Palethorpe <io@richiejp.com >
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com >
2025-12-02 16:35:39 +01:00
Ettore Di Giacinto
f5df806f35
Fixup tags
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2025-12-02 15:15:41 +01:00
Ettore Di Giacinto
cfd95745ed
feat: add cuda13 images ( #7404 )
...
* chore(ci): add cuda13 jobs
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Add to pipelines and to capabilities. Start to work on the gallery
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* gallery
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* capabilities: try to detect by looking at /usr/local
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* neutts
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* backends.yaml
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* add cuda13 l4t requirements.txt
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* add cuda13 requirements.txt
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Fixups
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Pin vllm
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* Not all backends are compatible
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* add vllm to requirements
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
* vllm is not pre-compiled for cuda 13
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
---------
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2025-12-02 14:24:35 +01:00
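One of the sub-steps above ("capabilities: try to detect by looking at /usr/local") boils down to probing which CUDA toolkits are installed on the host. Below is a rough Python sketch of that kind of probe; the /usr/local/cuda-<major> directory naming is an assumption for illustration, not the LocalAI detection code.

```python
# Rough sketch of a CUDA capability probe by scanning /usr/local; the
# cuda-<major>[.minor] directory layout is an assumption for illustration.
import re
from pathlib import Path


def detect_cuda_majors(root: str = "/usr/local") -> list[int]:
    """Return the CUDA major versions found as /usr/local/cuda-* directories."""
    majors = set()
    for entry in Path(root).glob("cuda-*"):
        match = re.match(r"cuda-(\d+)", entry.name)
        if match:
            majors.add(int(match.group(1)))
    return sorted(majors)


print(detect_cuda_majors())  # e.g. [12, 13] on a host with both toolkits installed
```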
LocalAI [bot]
665441ca94
chore: ⬆️ Update ggml-org/llama.cpp to ec18edfcba94dacb166e6523612fc0129cead67a ( #7406 )
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-12-02 07:59:52 +01:00
Ettore Di Giacinto
e3bcba5c45
chore: ⬆️ Update ggml-org/llama.cpp to 7f8ef50cce40e3e7e4526a3696cb45658190e69a ( #7402 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2025-12-01 07:50:40 +01:00
LocalAI [bot]
0824fd8efd
chore: ⬆️ Update ggml-org/llama.cpp to 8c32d9d96d9ae345a0150cae8572859e9aafea0b ( #7395 )
...
⬆️ Update ggml-org/llama.cpp
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com >
2025-11-30 09:06:18 +01:00
Ettore Di Giacinto
468ac608f3
chore(deps): bump llama.cpp to 'd82b7a7c1d73c0674698d9601b1bbb0200933f29' ( #7392 )
...
Signed-off-by: Ettore Di Giacinto <mudler@localai.io >
2025-11-29 08:58:07 +01:00
Ettore Di Giacinto
4b5977f535
chore: drop pinning of python 3.12 ( #7389 )
...
Update install.sh
Signed-off-by: Ettore Di Giacinto <mudler@users.noreply.github.com >
2025-11-28 11:02:56 +01:00
Ettore Di Giacinto
0d877b1e71
Revert "chore(l4t): Update extra index URL for requirements-l4t.txt" ( #7388 )
...
Revert "chore(l4t): Update extra index URL for requirements-l4t.txt (#7383 )"
This reverts commit 0d781e6b7e.
2025-11-28 11:02:11 +01:00