local model docs
docs/content/docs/home/agent-sdk/local-models.mdx (new file, 36 lines)
@@ -0,0 +1,36 @@
---
title: Running Models Locally
---

You can run open-source LLMs and vision models on your own machine using c/ua, without relying on cloud APIs. This is ideal for development, privacy, or running on air-gapped systems.

## Hugging Face (transformers)

Use the `huggingface-local/` prefix to run any Hugging Face model locally via the `transformers` library. This supports most text and vision models from the Hugging Face Hub.

**Example:**

```python
model = "huggingface-local/ByteDance-Seed/UI-TARS-1.5-7B"
```
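
For context, here is a minimal sketch of wiring this model string into an agent, assuming the `Computer`/`ComputerAgent` setup from the quickstart (constructor options elided; the task string is illustrative):

```python
import asyncio

from agent import ComputerAgent
from computer import Computer

async def main():
    # Computer() options (OS type, provider, etc.) are covered in the quickstart.
    async with Computer() as computer:
        agent = ComputerAgent(
            # huggingface-local/ loads and runs the model in-process via transformers.
            model="huggingface-local/ByteDance-Seed/UI-TARS-1.5-7B",
            tools=[computer],
        )
        # Stream results as the agent works through the task.
        async for result in agent.run("Open a browser and go to example.com"):
            print(result)

asyncio.run(main())
```

The first run downloads the checkpoint from the Hub, so expect a wait and make sure the machine has enough memory for a 7B model.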

## MLX (Apple Silicon)

Use the `mlx/` prefix to run models using the `mlx-vlm` library, optimized for Apple Silicon (M1/M2/M3). This allows fast, local inference for many open-source models.

**Example:**

```python
model = "mlx/mlx-community/UI-TARS-1.5-7B-6bit"
```
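
The agent wiring is the same as in the Hugging Face sketch above; only the model string changes:

```python
from agent import ComputerAgent

agent = ComputerAgent(
    # mlx/ routes inference through mlx-vlm; the 6-bit quantized build
    # trades a little accuracy for a much smaller memory footprint.
    model="mlx/mlx-community/UI-TARS-1.5-7B-6bit",
    tools=[computer],  # a Computer instance, as in the sketch above
)
```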

## Ollama

Use the `ollama_chat/` prefix to run models served by a local Ollama instance. This allows fast, local inference for many open-source models.

**Example:**

```python
model = "omniparser+ollama_chat/llama3.2:latest"
```
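
Note that this model string is composed: the part before the `+` (OmniParser) handles on-screen grounding, while the Ollama-served model after it handles planning. A sketch, assuming the model has already been pulled with `ollama pull llama3.2` and a local Ollama server is running:

```python
from agent import ComputerAgent

agent = ComputerAgent(
    # OmniParser locates UI elements; llama3.2 (served by Ollama) decides
    # what to do with them. Requires `ollama serve` to be running locally.
    model="omniparser+ollama_chat/llama3.2:latest",
    tools=[computer],  # a Computer instance, as in the earlier sketches
)
```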

---

For details on all supported providers, see [Supported Agents](./supported-agents).

@@ -7,6 +7,7 @@
     "chat-history",
     "callbacks",
     "sandboxed-tools",
+    "local-models",
     "migration-guide"
   ]
 }

@@ -6,6 +6,8 @@ This page lists all supported agent loops and their compatible models/configurations.

 All agent loops are compatible with any LLM provider supported by LiteLLM.

+See [Running Models Locally](./local-models) for how to use Hugging Face, MLX, and Ollama models on your own machine.
+
 ## Anthropic CUAs

 - Claude 4: `claude-opus-4-20250514`, `claude-sonnet-4-20250514`

@@ -20,7 +20,7 @@ c/ua combines Computer (interface) + Agent (AI) for automating desktop apps. Com

 ## 3. Install c/ua

 ```bash
-pip install "cua-agent2[all]" cua-computer
+pip install "cua-agent[all]" cua-computer
 ```

 ## 4. Using Computer

@@ -45,7 +45,7 @@ async with Computer(

 ## 5. Using Agent

 ```python
-from agent2 import ComputerAgent
+from agent import ComputerAgent

 agent = ComputerAgent(
     model="anthropic/claude-3-5-sonnet-20241022",

@@ -20,7 +20,7 @@ c/ua combines Computer (interface) + Agent (AI) for automating desktop apps. The

 ## 3. Install c/ua

 ```bash
-pip install "cua-agent2[all]" cua-computer
+pip install "cua-agent[all]" cua-computer
 ```

 ## 4. Run the Agent UI